Abstract
The main aim of this work is to present a numerical analysis of the convergence and accuracy of selected generalized
perturbation-based schemes in linear and nonlinear problems of solid mechanics. An algorithm for determining the basic
probabilistic characteristics has been developed using the iterative generalized stochastic higher-order perturbation method
applied to symmetrically truncated Gaussian random variables. It has been confirmed that usage of a sufficiently high order of
the Truncated Iterative Stochastic Perturbation Technique (TISPT) allows for achieving any desired accuracy in determining the
probabilistic characteristics of the static structural response up to the fourth order. The semi-analytical probabilistic approach serves as the
reference solution in this study; it is based upon the same composite response functions determined with the use of the Least
Squares Method from specific series of FEM experiments. The entire methodology has been provided for the given
extreme value of the coefficient of variation αmax of the input uncertainty source. Additionally, a procedure for selecting
the order of the stochastic perturbation method needed to achieve 1% numerical accuracy in all probabilistic moments up to the fourth order
has been proposed. Computational experiments include a simply supported elastic Euler–Bernoulli beam, a set of steel diagrid
structures, nonlinear tension of a steel round bar, as well as a homogenization procedure for a particulate composite.
© 2023 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license
(http://creativecommons.org/licenses/by-nc-nd/4.0/).
Keywords: Uncertainty analysis; Stochastic perturbation technique; Error estimation; Probabilistic convergence; Stochastic Finite Element
Method; Probabilistic semi-analytical method
1. Introduction
Truncated random variables are fundamental for engineering design, reliability assessment, and the solution of
most physical problems. Many mechanical and physical parameters of fluids, solids, gases, and also of their
mixtures naturally exhibit non-negative values. On the other hand, geometrical imperfections in civil, and especially
in mechanical and aeronautical engineering, may be treated as random, but they are clearly bounded. Therefore,
probabilistic analysis in all these cases should employ probabilistic integrals rewritten for truncated (one-
or two-side bounded) random variables or fields [1–3]. This problem is of paramount importance when handling
Gaussian uncertainties, where the truncation effect may cause remarkable modeling errors, whose magnitudes
may be determined even in some analytical way. Uncertainty truncation analysis appears only exceptionally even in
mathematical textbooks and is a very complex problem in various stochastic computer methods. Its implementation
would be relatively easy only in the Monte-Carlo simulation (MCS) scheme, where a simple selection procedure
during the generation of random quantities is sufficient to create the entire population, which is finally processed by
the same statistical estimators as in the case of no truncation. It also seems that analytical and some semi-analytical
approaches could be successful, assuming that the symbolic integration inherent in the probabilistic calculus for one- or
two-side bounded domains could be carried out. Other most popular stochastic numerical methods, like the first-,
the second-, or general-order perturbation techniques [4–9], von Neumann [10], polynomial chaos or Karhunen–
Loeve expansions [11], Kriging [12] or homotopy-based strategies [13,14], as well as even some probability density
function (PDF) transformation methods, practically do not have at present their modifications towards any truncations;
such truncations undoubtedly should affect the final equations describing probabilistic moments of the analyzed system's linear,
nonlinear, or transient responses. On the other hand, numerical error estimation in the Stochastic Finite Element
Method (SFEM), even with no truncation of the input uncertainties, is still a rather challenging problem. The modeling
error of the Finite Element Method (FEM) itself is quite well explored, cf. [15], for instance, but
the usage of its stochastic version in various stochastic problems demands more attention, where engineering analyses are
rather exceptional [16–19]. In the context of stochastic problems, numerical error is frequently identified with the
statistical convergence of the MCS-resulting probabilistic moments & characteristics with an increasing number of
random samples. All the expansion-based (Taylor or K–L series) approaches are parametrized with the expansion
order, and their accuracy should be discussed in the context of both the probabilistic moments convergence (limits
existence and their differences to statistical estimators, for instance) and the truncation effect influence. This is the
key issue studied in this work.
Furthermore, it is well known that a solution to a structural problem including randomness can be accompanied
by a verification of the calculated reliability indices, and one of the first approaches has been proposed by
Cornell [20]. This index, referred to as the First Order Reliability Method (FORM) approach, was based upon a simple limit
state function g defined as the difference between the structural responses f(b) and their given thresholds fmax. They
have been defined using the expected values and the variances of these two functions. However, the reliability can be
alternatively described by indicators which take into account some additional parameters describing both the structural
resistance and the structural effort probability distributions [21–25] (like skewness, for instance). So not only the first
two moments but also higher-order probabilistic characteristics would be desired to perform an efficient reliability
assessment in engineering practice. Therefore, an efficient algorithm for the determination of the basic probabilistic characteristics
is proposed here using the generalized iterative stochastic fourth-moment perturbation method (of any
order), implemented for truncated Gaussian design uncertainties and contrasted with the semi-analytical approach. Its
basic features are: (i) relatively short computational time, (ii) no need for massive computers to provide the analysis,
and also (iii) relatively easy implementation into different existing FEM computer systems [26]. One may apply
also polynomial chaos [11], Monte-Carlo simulation [27], approximated principal deformation modes [28], or/and
Probability Transformation Method [29–31], for instance. Alternatively, it is possible to use the Latin Hypercube
Sampling, fuzzy sets theory, or Bayesian approach, but a calculation of higher-order statistics with truncated random
input connected with numerical error analysis could be a very difficult task.
Independently of the probabilistic method choice, the structural responses—with respect to some input parameter
subjected to uncertain fluctuations—are always required. Their determination is traditionally assisted by the
global version of the Response Function Method (RFM) [32,33], which is quite similar to the Response Surface
Methods (RSM) frequently applied in reliability analysis [34–37]. To complete this task, it is necessary to determine such a function
using multiple solutions of the given boundary or initial value problem around the expectation of the random
parameter. As for other stochastic methods, the resulting discrete values are obtained
traditionally from series of Finite Element Method experiments [38–42], but they can also come from the Finite
Difference Method [43,44], the Finite Volume Method [45], the Boundary Element Method [46], or the iso-geometric
analysis [47–50]. The structural response issue has been studied here with the use of 144 different polynomial-
based functions. They were given some predefined analytical forms of any mathematical complexity, free from
the RFM error, and can be used to examine the numerical convergence and accuracy of the schemes from the
perturbation methods family in determining probabilistic moments, in comparison with the direct symbolic integration
approach, for various theoretical structural behaviors, which is the main aim of this work. This way was chosen to
make the validation completely free from the statistical estimation errors inherent in various implementations of the
Monte-Carlo simulation. The integration process according to probability theory definitions is limited here from
the numerical point of view by using the three-sigma rule so that the random variable is analyzed as truncated.
Therefore, the truncated equivalents (TSPT and TISPT) [51] of the classical linearized (SPT) and iterative (ISPT)
perturbation-based schemes were used in this work to increase the efficiency when compared to the direct symbolic
integration approach. Their influence on the results was compared numerically. It was decided to use the maximum
relative error among these obtained for the input coefficients of variation αi ∈ {0.0125, 0.025, . . . , αmax } as a measure
of the accuracy for the given perturbation-based scheme and its given order. The final goal was to find the most
accurate perturbation-based scheme and a quick way to establish its order, above which satisfactory correctness is
achieved. Its threshold was assumed at 1%. Thanks to the conclusions, it will be possible to obtain the satisfactorily
accurate, continuous dependence of the probabilistic characteristics on the input coefficient of variation (instead of
operating only on a limited number of discrete data) as they can be used in further reliability calculations.
The problem of the perturbation-based approaches’ efficiency and accuracy was examined in the case of the
classical linearized perturbation method (SPT), where for certain order polynomial responses it is just enough to
apply the approach with only a single order higher Taylor expansion substituted into the subsequent formulas [43].
The 10th-order approach is usually sufficient to determine accurately the first two probabilistic moments [52]
but the determination of the next two moments demands higher-order expansions and/or application of the
iterative calculation of the resulting probabilistic moments (ISPT) [53]. Polynomial responses are commonly
applied [35,37,54–59] and this idea follows some other applications in computational practice [60,61]. However,
such approximations do not always provide satisfactory accuracy. Moreover, their application to describe, for example, the extreme
magnitudes of structures seems unfortunate, because polynomial limits at infinity are always infinite,
whereas at one of these limits the responses should instead tend to 0, as their values are only positive. Therefore, in this paper, the
numerical convergence and accuracy of the perturbation methods were analyzed for composite functions based on
polynomial w—hereinafter called the base reference polynomials—which have a different algebraic structure. They
are more flexible—due to the wider range of available functions—and can be further expanded into the Taylor
series—as the compositions of elementary functions are analytical—in the perturbation technique, for instance. In
turn, the different polynomial basis should help in capturing nonlinearity between the structural responses and the
basic random variables. The problem of composite functions convergence was partially addressed in [51], but using
only three dependencies as reference response functions (radical, linear, and inverse), which additionally simplifies
to a linear function. Therefore, 48 groups of functions based on first, second, and fifth-order polynomials were
considered here to obtain also the appropriate level of their complexity.
Numerical illustrations attached in this work include some practical engineering examples recalled after [62]:
(i) classical elastic uniformly loaded simply supported beam (necessary for the limit states validation), (ii) the
set of eight steel diagrid grillages basic eigenfrequencies, the extreme reduced stresses, extreme global vertical
displacements as well as extreme local deflections, (iii) nonlinear static analysis of the tensioned steel bar as well as
(iv) homogenization of the particle-reinforced composite with linear elastic components and almost incompressible
particle. Uncertainty analysis in these cases is based upon randomization of (i) some geometrical parameters like the
length of a structural element or the particle size, (ii) cross-section dimensions (wall thicknesses), as well as (iii)
material characteristics (Young's modulus of the steel elements). The Least Squares Method procedures, analytical
integration in probabilistic calculus, higher-order differentiation inherent in Taylor expansions, and also some
symbolic derivation processes have been programmed and completed in the computer algebra system MAPLE [63].
2. Theoretical background
2.1. Direct symbolic integration approach
In the case of some real function f (b) of the stationary input random variable b having Gaussian probability
density function p(b) with σ (b) ≡ σ denoting its standard deviation, the expected value E[b] = b0 and input
coefficient of variation α (b) ≡ α, the following classical definitions for the basic probabilistic moments and
coefficients of this function are introduced [64–67]:
- the expectation
E[f(b)] = \int_{-\infty}^{+\infty} f(b) \, p(b) \, db,   (1)
- the variance
Var(f(b)) = \int_{-\infty}^{+\infty} \big( f(b) - E[f(b)] \big)^2 \, p(b) \, db.   (2)
The basic idea of the perturbation-based approaches is an expansion of all random functions of the given problem
via the Taylor series in the neighborhood D of their random variable expectations [43,69]:
f(b) = f(b^0) + \sum_{i=1}^{\infty} \varepsilon^i \, \frac{f^{(i)}(b^0)}{i!} \, (\Delta b)^i,   (5)
where f^{(i)}(b^0) is the value of the ith order derivative of the function f at the point b^0, the quantity \Delta b = b - b^0
is the first-order variation of the variable b, and \varepsilon > 0 is the perturbation parameter [41,70]. However, this parameter makes a
difference only in the Direct Differentiation Method (DDM), while here the Response Function Method was used.
Therefore, the value of \varepsilon was arbitrarily assumed to be 1.0 [41,43], so it vanishes in further formulas. It is clear
from Eq. (5) that it can be valid for functions infinitely differentiable in neighborhood D only. With the above
assumption, expansion into the Taylor series, cf. Eq. (5), is accurate enough for a given b ∈ D if the sequence of
residuals
Rn = f (b) − Sn (6)
converges to zero for n → ∞; S_n is the sum of the first n terms of this Taylor series. Inside the interval where this
condition is met, it is possible to integrate and differentiate the series term by term. From the numerical point of
view, this expansion is carried out for the summation over a finite number of components. It yields (the nth-order
Taylor expansion):
f(b) \approx a_0 + \sum_{i=1}^{n} a_i (\Delta b)^i = S_n,   (7)
where the following notation for the coefficients a_i was introduced here to shorten the formulas:
a_i = \begin{cases} f(b^0) & \text{when } i = 0 \\ \dfrac{f^{(i)}(b^0)}{i!} & \text{when } i = 1, 2, \ldots, n. \end{cases}   (8)
An additional benefit is the reduction of the requirement for the function f. It is now sufficient that it is only
(n+1)-times continuously differentiable in the given neighborhood D of the mean value. The above expansion,
substituted into the classical integral definitions of the first four probabilistic moments, leads to the formulas of the
stochastic perturbation method. In the case of the expected value it yields:
E[f(b)] \approx \int_{-\infty}^{+\infty} \Big( a_0 + \sum_{i=1}^{n} a_i (\Delta b)^i \Big) \, p(b) \, db = a_0 \mu_0(b) + \sum_{i=1}^{n} a_i \mu_i(b),   (9)
where µi (b) denotes ith order central probabilistic moment of a random variable b without limits, which for the
Gaussian distribution is defined as [43]:
\mu_i \equiv \mu_i(b) = \int_{-\infty}^{+\infty} (b - b^0)^i \, p(b) \, db = \begin{cases} 1 & \text{when } i = 0 \\ (i-1)!! \, (\sigma(b))^i & \text{when } 2 \mid i \\ 0 & \text{otherwise.} \end{cases}   (10)
Let us consider the input random variable with a symmetric probability density function and with integration limits
symmetrical about b^0. One can show that the expectation of the structural response function of course remains
the same in both the linearized and the nonlinear iterative schemes of the 2nth order perturbation approach [53]:
E[f(b)] = a_0 \mu_0 + \sum_{i=1}^{n} a_{2i} \mu_{2i}.   (11)
The difference occurs only in the case of the central moments from the second upwards. When we substitute
only the first term from Eq. (11) into their integral definitions, Eqs. (1)–(3), we deal with the stochastic perturbation
technique (SPT), but including a larger number of terms during the derivation of higher-order moments is the basis
of the iterative stochastic perturbation technique (ISPT). Numerical experiments included in [51] have confirmed
that their maximum possible number should be then used. Correspondingly, when we consider a random variable
with symmetric lower and upper truncations of the probability density function we obtain the truncated equivalents
of the classical linearized and iterative perturbation-based schemes (TSPT and TISPT) [51]. It comes down to the
use of modified formulas for the ith central moment of a random variable b, which in the case of the expected
value of the 2nth order gives (n+1 terms):
E^{2n}_{TISPT} = E^{2n}_{TSPT} = a_0 \mu_0 + \sum_{i=1}^{n} a_{2i} \mu_{2i},   (12)
where
\mu_i \equiv \mu_i(b) = \int_{b^0 - k\sigma(b)}^{b^0 + k\sigma(b)} (b - b^0)^i \, p(b) \, db = \begin{cases} \operatorname{erf}\!\big(\tfrac{k}{\sqrt{2}}\big) & \text{when } i = 0 \\[4pt] (i-1)!! \, (\sigma(b))^i \Big[ \operatorname{erf}\!\big(\tfrac{k}{\sqrt{2}}\big) - \sqrt{\tfrac{2}{\pi}} \, e^{-k^2/2} \sum_{j=1}^{i/2} \tfrac{k^{2j-1}}{(2j-1)!!} \Big] & \text{when } 2 \mid i \\[4pt] 0 & \text{otherwise} \end{cases}   (13)
and k = 3 is adopted according to the so-called three-sigma rule. When the truncations of the probability density
function are not symmetric around the mean value b0 —for example, the random variable takes only positive
values—in Eq. (12) we need to consider also odd-order terms as they do not simply vanish even for the symmetric
distribution:
\mu_i \equiv \mu_i(b) = \int_{0}^{+\infty} (b - b^0)^i \, p(b) \, db = \begin{cases} \tfrac{1}{2}\Big[\operatorname{erf}\!\big(\tfrac{b^0}{\sigma(b)\sqrt{2}}\big) + 1\Big] & \text{when } i = 0 \\[6pt] \tfrac{(i-1)!! \, (\sigma(b))^i}{2} \Big[ \operatorname{erf}\!\big(\tfrac{b^0}{\sigma(b)\sqrt{2}}\big) + 1 - \sqrt{\tfrac{2}{\pi}} \, e^{-\tfrac{(b^0)^2}{2(\sigma(b))^2}} \sum_{j=1}^{i/2} \tfrac{(b^0/\sigma(b))^{2j-1}}{(2j-1)!!} \Big] & \text{when } 2 \mid i \\[6pt] \tfrac{(i-1)!! \, (\sigma(b))^i}{2} \, \sqrt{\tfrac{2}{\pi}} \, e^{-\tfrac{(b^0)^2}{2(\sigma(b))^2}} \sum_{j=1}^{(i+1)/2} \tfrac{(b^0/\sigma(b))^{2j-2}}{(2j-2)!!} & \text{otherwise.} \end{cases}   (14)
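The truncated moments of Eq. (13) are straightforward to evaluate numerically. The following short Python sketch (an illustration added for this text, not the authors' MAPLE implementation; function names and tolerances are assumptions) computes the symmetric truncated central moments and cross-checks them against direct quadrature of the defining integral:

```python
import math
from scipy import integrate
from scipy.stats import norm

def _dfact(n):
    """Double factorial n!! (with (-1)!! = 1 and 0!! = 1)."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def mu_trunc(i, sigma, k=3.0):
    """Central moment mu_i of Eq. (13): integral of (b - b0)^i * p(b) over
    [b0 - k*sigma, b0 + k*sigma], with p(b) the untruncated Gaussian density
    (k = 3 corresponds to the three-sigma rule)."""
    if i == 0:
        return math.erf(k / math.sqrt(2.0))
    if i % 2 == 1:
        return 0.0  # odd central moments vanish for the symmetric truncation
    s = sum(k ** (2 * j - 1) / _dfact(2 * j - 1) for j in range(1, i // 2 + 1))
    return (_dfact(i - 1) * sigma ** i
            * (math.erf(k / math.sqrt(2.0))
               - math.sqrt(2.0 / math.pi) * math.exp(-k ** 2 / 2.0) * s))

# cross-check against quadrature of the defining integral (b0 = 0 for brevity)
sigma, k = 1.0, 3.0
for i in range(0, 9):
    num = integrate.quad(lambda b: b ** i * norm.pdf(b, 0.0, sigma),
                         -k * sigma, k * sigma,
                         epsabs=1e-12, epsrel=1e-12)[0]
    assert abs(num - mu_trunc(i, sigma, k)) < 1e-9 * max(1.0, abs(num))
```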
When the TISPT scheme is concerned, the 4nth order expression for the variance was introduced in [51] as
follows (3n^2 + n + 1 terms):
Var^{4n}_{TISPT} = (a_0)^2 \mu_0 (1 - \mu_0)^2 + 2 a_0 (1 - \mu_0)^2 \sum_{i=1}^{n} a_{2i}\mu_{2i} + (\mu_0 - 2) \Big[ \sum_{i=1}^{n} a_{2i}\mu_{2i} \Big]^2 + \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i} a_{2j} \mu_{2i+2j} + \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i-1} a_{2j-1} \mu_{2i+2j-2}.   (15)
It simplifies if we do not need to consider truncations of the probability density function (ISPT):
Var^{4n}_{ISPT} = \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i} a_{2j} \mu_{2i+2j} + \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i-1} a_{2j-1} \mu_{2i+2j-2} - \Big[ \sum_{i=1}^{n} a_{2i}\mu_{2i} \Big]^2,   (16)
where the last expression indicates the use of the iterative approach.
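Given the Taylor coefficients a_i of Eq. (8) and the truncated moments µ_i, the TISPT expectation and variance of Eqs. (12) and (15) reduce to plain finite sums. A possible sketch, under the same assumptions as before and reusing mu_trunc from the previous listing, could read:

```python
def expectation_tispt(a, sigma, k=3.0):
    """E^{2n}_TISPT of Eq. (12); a = [a_0, a_1, ..., a_{2n}]."""
    n = (len(a) - 1) // 2
    mu = [mu_trunc(i, sigma, k) for i in range(2 * n + 1)]
    return a[0] * mu[0] + sum(a[2 * i] * mu[2 * i] for i in range(1, n + 1))

def variance_tispt(a, sigma, k=3.0):
    """Var^{4n}_TISPT of Eq. (15)."""
    n = (len(a) - 1) // 2
    mu = [mu_trunc(i, sigma, k) for i in range(4 * n + 1)]
    B = sum(a[2 * i] * mu[2 * i] for i in range(1, n + 1))   # iterative part of E
    D = sum(a[2 * i] * a[2 * j] * mu[2 * i + 2 * j]          # double sums of Eq. (15)
            for i in range(1, n + 1) for j in range(1, n + 1))
    D += sum(a[2 * i - 1] * a[2 * j - 1] * mu[2 * i + 2 * j - 2]
             for i in range(1, n + 1) for j in range(1, n + 1))
    m0 = mu[0]
    return (a[0] ** 2 * m0 * (1 - m0) ** 2 + 2 * a[0] * (1 - m0) ** 2 * B
            + (m0 - 2) * B ** 2 + D)
```

Choosing k large enough that erf(k/√2) ≈ 1 (so that µ_0 → 1) recovers the ISPT expression of Eq. (16) from the same routine.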
Here also the formulas for higher probabilistic moments were derived manually with the support of symbolic
calculations of the computer algebra system MAPLE. The third central probabilistic moment of the 6nth order in
the TISPT scheme was obtained by substituting the 2nth order Taylor expansion and the 2nth order expected value
into the integral definition from Eq. (3) with modified integration limits:
\mu^{6n}_{3,TISPT} = \int_{b^0 - k\sigma(b)}^{b^0 + k\sigma(b)} \big( f(b) - E[f(b)] \big)^3 \, p(b) \, db = \int_{b^0 - k\sigma(b)}^{b^0 + k\sigma(b)} \Big( a_0 + \sum_{i=1}^{2n} a_i (\Delta b)^i - a_0 \mu_0 - \sum_{i=1}^{n} a_{2i}\mu_{2i} \Big)^3 p(b) \, db.   (17)
After splitting the first sum into even and odd terms, raising the expression in parentheses to the third power,
integrating, and excluding the odd central moments of the random variable, Eq. (17) can be reduced to the following
form (5n^3 + 3n^2 + n + 1 terms):
\mu^{6n}_{3,TISPT} = (a_0)^3 \mu_0 (1 - \mu_0)^3 + 3 (a_0)^2 (1 - \mu_0)^3 \sum_{i=1}^{n} a_{2i}\mu_{2i} + 3 a_0 (1 - \mu_0)(\mu_0 - 2) \Big[ \sum_{i=1}^{n} a_{2i}\mu_{2i} \Big]^2 + (3 - \mu_0) \Big[ \sum_{i=1}^{n} a_{2i}\mu_{2i} \Big]^3
+ 3 a_0 (1 - \mu_0) \Big[ \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i} a_{2j} \mu_{2i+2j} + \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i-1} a_{2j-1} \mu_{2i+2j-2} \Big]
- 3 \sum_{i=1}^{n} a_{2i}\mu_{2i} \cdot \Big[ \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i} a_{2j} \mu_{2i+2j} + \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i-1} a_{2j-1} \mu_{2i+2j-2} \Big]
+ 3 \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{m=1}^{n} a_{2i-1} a_{2j-1} a_{2m} \mu_{2i+2j+2m-2} + \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{m=1}^{n} a_{2i} a_{2j} a_{2m} \mu_{2i+2j+2m}.   (18)
However, it again simplifies if we do not need to consider truncations of the probability density function (ISPT):
\mu^{6n}_{3,ISPT} = 2 \Big[ \sum_{i=1}^{n} a_{2i}\mu_{2i} \Big]^3 - 3 \sum_{i=1}^{n} a_{2i}\mu_{2i} \cdot \Big[ \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i} a_{2j} \mu_{2i+2j} + \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i-1} a_{2j-1} \mu_{2i+2j-2} \Big]
+ 3 \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{m=1}^{n} a_{2i-1} a_{2j-1} a_{2m} \mu_{2i+2j+2m-2} + \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{m=1}^{n} a_{2i} a_{2j} a_{2m} \mu_{2i+2j+2m},   (19)
where the first two components indicate the use of the iterative approach. Finally, the fourth central probabilistic
moment of the 8nth order in the TISPT scheme was obtained here analogously (8n^4 + 5n^3 + 3n^2 + n + 1 terms):
\mu^{8n}_{4,TISPT} = (a_0)^4 \mu_0 (1 - \mu_0)^4 + 4 (a_0)^3 (1 - \mu_0)^4 \sum_{i=1}^{n} a_{2i}\mu_{2i} + 6 (a_0)^2 (1 - \mu_0)^2 (\mu_0 - 2) \Big[ \sum_{i=1}^{n} a_{2i}\mu_{2i} \Big]^2
+ 6 (a_0)^2 (1 - \mu_0)^2 \Big[ \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i} a_{2j} \mu_{2i+2j} + \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i-1} a_{2j-1} \mu_{2i+2j-2} \Big]
+ 4 a_0 (1 - \mu_0)(3 - \mu_0) \Big[ \sum_{i=1}^{n} a_{2i}\mu_{2i} \Big]^3 - 12 a_0 (1 - \mu_0) \sum_{i=1}^{n} a_{2i}\mu_{2i} \cdot \Big[ \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i} a_{2j} \mu_{2i+2j} + \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i-1} a_{2j-1} \mu_{2i+2j-2} \Big]
+ 4 a_0 (1 - \mu_0) \Big[ 3 \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{m=1}^{n} a_{2i-1} a_{2j-1} a_{2m} \mu_{2i+2j+2m-2} + \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{m=1}^{n} a_{2i} a_{2j} a_{2m} \mu_{2i+2j+2m} \Big] + (\mu_0 - 4) \Big[ \sum_{i=1}^{n} a_{2i}\mu_{2i} \Big]^4
+ 6 \Big[ \sum_{i=1}^{n} a_{2i}\mu_{2i} \Big]^2 \Big[ \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i} a_{2j} \mu_{2i+2j} + \sum_{i=1}^{n}\sum_{j=1}^{n} a_{2i-1} a_{2j-1} \mu_{2i+2j-2} \Big]
- 4 \sum_{i=1}^{n} a_{2i}\mu_{2i} \cdot \Big[ 3 \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{m=1}^{n} a_{2i-1} a_{2j-1} a_{2m} \mu_{2i+2j+2m-2} + \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{m=1}^{n} a_{2i} a_{2j} a_{2m} \mu_{2i+2j+2m} \Big]
+ \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{m=1}^{n}\sum_{p=1}^{n} a_{2i} a_{2j} a_{2m} a_{2p} \mu_{2i+2j+2m+2p} + 6 \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{m=1}^{n}\sum_{p=1}^{n} a_{2i-1} a_{2j-1} a_{2m} a_{2p} \mu_{2i+2j+2m+2p-2} + \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{m=1}^{n}\sum_{p=1}^{n} a_{2i-1} a_{2j-1} a_{2m-1} a_{2p-1} \mu_{2i+2j+2m+2p-4},   (20)
where the first four expressions indicate the use of the iterative approach. When the method order is not the above
multiple of the probabilistic moment number m, the formula is assumed for a greater multiple by rejecting too high
order terms. However, this leads to the incompleteness of the expansions, therefore such cases were not considered
in this paper. Having the above formulas, a limit value of the output coefficient of variation was symbolically
estimated:
\alpha_{lim,TISPT}(f(b)) = \frac{\sqrt{Var(f(b))}}{E[f(b)]} = \frac{\sqrt{\int_{b^0 - k\sigma(b)}^{b^0 + k\sigma(b)} (f(b) - E[f(b)])^2 \, p(b) \, db}}{E[f(b)]} \cong \frac{|f^{(1)}(b^0)|}{f(b^0)} \, b^0 \sqrt{\operatorname{erf}\!\Big(\frac{k}{\sqrt{2}}\Big) - \sqrt{\frac{2}{\pi}} \, k \, e^{-k^2/2}} \cdot \alpha(b).   (22)
Additionally, the limit values of the output skewness
\beta_{s,lim,TISPT}(f(b)) = \lim_{\alpha \to 0} \frac{\mu^{6n}_{3,TISPT}}{\big(\sqrt{Var^{4n}_{TISPT}}\big)^3} = \frac{(a_0)^3 \mu_0 (1 - \mu_0)^3}{\big(\sqrt{(a_0)^2 \mu_0 (1 - \mu_0)^2}\big)^3} = \frac{\operatorname{sgn}(f(b^0))}{\sqrt{\mu_0}} \approx 1.0   (23)
and of the output kurtosis
\beta_{k,lim,TISPT}(f(b)) = \lim_{\alpha \to 0} \frac{\mu^{8n}_{4,TISPT}}{\big(Var^{4n}_{TISPT}\big)^2} - 3 = \frac{(a_0)^4 \mu_0 (1 - \mu_0)^4}{\big((a_0)^2 \mu_0 (1 - \mu_0)^2\big)^2} - 3 = \frac{1}{\mu_0} - 3 \approx -2.0   (24)
have been defined here. The computer algebra system MAPLE was engaged to complete the computational implementation
of all these method formulas considering their relatively complex analytical structure. The quickest way to
use them was to determine the Taylor expansion coefficients symbolically in a recursive manner.
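A similar recursive determination of the Taylor coefficients can be mimicked outside MAPLE; the snippet below is only a hedged illustration written in Python with sympy (the response function, the expansion point, and all names are assumptions), and it also confirms numerically the limiting values of Eqs. (23)–(24) for k = 3:

```python
import math
import sympy as sp

b = sp.Symbol('b')
f = 10 + sp.exp(b)                       # example response, cf. y6 with POb = 1

def taylor_coeffs(expr, order, about):
    """a_i = f^(i)(b0)/i! of Eq. (8), obtained by successive differentiation."""
    coeffs, deriv = [], expr
    for i in range(order + 1):
        coeffs.append(sp.simplify(deriv.subs(b, about) / sp.factorial(i)))
        deriv = sp.diff(deriv, b)
    return coeffs

a = taylor_coeffs(f, 10, sp.Integer(1))  # 10th-order expansion about b0 = 1
print(a[:3])                             # [E + 10, E, E/2], with E = exp(1)

# limiting values of the output skewness and kurtosis for k = 3
mu0 = math.erf(3 / math.sqrt(2))
print(1 / math.sqrt(mu0))                # ~1.0013, cf. Eq. (23)
print(1 / mu0 - 3)                       # ~-1.997 (about -2.0), cf. Eq. (24)
```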
It should be added here that this methodology is also applicable to study cross-correlations of the resulting
random fields of structural response. Let us consider for illustration R random fields [40] denoted as b_r(x_k), r =
1, 2, \ldots, R, and k = 1, 2, 3, whose first two probabilistic moments equal
E[b_r(x_k)] = \int_{-\infty}^{+\infty} b_r(x_k) \, p(b_r(x_k)) \, db_r   (25)
r, s = 1, . . . , R, and i, j, k, l = 1, 2, 3.
Now, contrary to the Second Order Second Moment (SOSM) analysis [40], the first terms in the above brackets
are replaced with a general order Taylor expansion. Additionally, the corresponding expectations are not replaced
by their means but result from Eq. (9). The above equation simplifies remarkably when the cross-correlations
between two different functions, namely f and g, related to the same uncertainty source b are
considered. There holds
Cov(f(b), g(b)) = \int_{-\infty}^{+\infty} \Big\{ f(b^0) + \sum_{i=1}^{n} \frac{\varepsilon^i}{i!} \frac{\partial^i f(b)}{\partial b^i}\Big|_{b=b^0} (\Delta b)^i - E[f(b)] \Big\} \times \Big\{ g(b^0) + \sum_{i=1}^{n} \frac{\varepsilon^i}{i!} \frac{\partial^i g(b)}{\partial b^i}\Big|_{b=b^0} (\Delta b)^i - E[g(b)] \Big\} \, p_b(x) \, dx.   (29)
This correlation function may be determined for illustration according to the SOSM methodology as
Cov(f(b), g(b)) \cong \frac{\partial f(b)}{\partial b}\Big|_{b=b^0} \frac{\partial g(b)}{\partial b}\Big|_{b=b^0} Var(b)   (30)
and this formula is quite independent of the probability density function of the parameter b. There is no doubt
that the correlation of two response functions in the context of the iterative generalized stochastic perturbation technique
needs automatic derivation, taking into account its algebraic complexity.
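For the first-order SOSM estimate of Eq. (30) such automation is trivial; the fragment below is a schematic example only, with two assumed response functions and an assumed input variance:

```python
import sympy as sp

b = sp.Symbol('b')
b0, var_b = 1.0, 0.04 ** 2         # assumed mean and variance of the input

f = 10 + sp.exp(b)                 # two example response functions
g = 1 / (10 + b)

# Eq. (30): Cov(f, g) ~= f'(b0) * g'(b0) * Var(b)
cov_sosm = float(sp.diff(f, b).subs(b, b0) * sp.diff(g, b).subs(b, b0)) * var_b
print(cov_sosm)
```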
The response function method consists in approximating the real dependence of the structural response with
an analytical metamodel—the response function f(b). To complete this task, it is necessary to determine such a
function using multiple solutions of the investigated boundary value problem around the mean value of the random
parameter, for b_i (i = 1, \ldots, N) belonging to the interval [b^0 - \Delta b, b^0 + \Delta b]. As a result, we obtain a set of discrete
values burdened with the error of the numerical procedure, e.g. the Finite Element Method.
There are several ways to choose the \Delta b / b^0 ratio [26,68] as well as this interval's discretization, both in terms of
uniformity and the number of trial points [32]. Here, a uniform interval subdivision with N = 11 trial points and the ratio
\Delta b / b^0 = 0.05 was applied as the most frequently used. This choice was confirmed by the numerical experiments
included in [43].
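A minimal sketch of this RFM step is given below (assumed setup: N = 11 uniformly spaced trial points, ∆b/b0 = 0.05, a polynomial basis, and a plain function call standing in for the FEM run; the weighted variant simply amplifies the central, mean-value solution):

```python
import numpy as np

def rfm_fit(solver, b0, degree=4, N=11, ratio=0.05, center_weight=None):
    """Least Squares fit of a polynomial response function f(b) from N
    FEM-type solutions computed around the mean value b0 (RFM step)."""
    b = np.linspace(b0 * (1 - ratio), b0 * (1 + ratio), N)    # trial points
    y = np.array([solver(bi) for bi in b])                    # discrete responses
    w = np.ones(N)
    if center_weight is not None:       # WLSM: emphasize the mean-value solution
        w[N // 2] = center_weight       # e.g. sum of the remaining weights
    coeffs = np.polyfit(b, y, degree, w=np.sqrt(w))
    rmse = np.sqrt(np.mean((np.polyval(coeffs, b) - y) ** 2))  # fitting error
    return np.poly1d(coeffs), rmse

# toy stand-in for a FEM solution (true response assumed known here)
f_hat, err = rfm_fit(lambda l: 5 * 10.0 * l ** 4 / (384 * 210e6 * 1e-4), b0=10.0)
```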
Each unknown response function was approximated here using composite functions (Table 1) based on
polynomials w* of orders from 1 to 10—hereinafter called the approximation base polynomials. The responses
are then generalizations of the most frequently used empirical formulas and their coefficients were computed using
the classic Least Squares Method [71], as well as its weighted version (WLSM) to modulate the importance of
computational analysis results [72]. The values computed for the expectation of the random variables were recognized
as crucial in the latter method. Their weights have been assumed as the sum of the remaining ten weights of equivalent
results (each equal to 1) [53,68]—a Dirac-type distribution of the weights.
Table 1
Analytical response functions under consideration, y = ψ⁻¹(w(ϕ(x))), for the inner functions ϕ(x) = x, 1/x, √x, 1/√x, ln(x), exp(x) (columns) and the outer functions ψ(y) (rows).
ψ(y) = y:         y1 = w(x), y2 = w(1/x), y3 = w(√x), y4 = w(1/√x), y5 = w(ln(x)), y6 = w(exp(x))
ψ(y) = 1/y:       y7 = 1/w(x), y8 = 1/w(1/x), y9 = 1/w(√x), y10 = 1/w(1/√x), y11 = 1/w(ln(x)), y12 = 1/w(exp(x))
ψ(y) = y²:        y13 = √w(x), y14 = √w(1/x), y15 = √w(√x), y16 = √w(1/√x), y17 = √w(ln(x)), y18 = √w(exp(x))
ψ(y) = 1/y²:      y19 = 1/√w(x), y20 = 1/√w(1/x), y21 = 1/√w(√x), y22 = 1/√w(1/√x), y23 = 1/√w(ln(x)), y24 = 1/√w(exp(x))
ψ(y) = ln(y):     y25 = exp(w(x))ᵃ, y26 = exp(w(1/x)), y27 = exp(w(√x)), y28 = exp(w(1/√x)), y29 = exp(w(ln(x)))ᵇ, y30 = exp(w(exp(x)))
ψ(y) = exp(y):    y31 = ln(w(x)), y32 = ln(w(1/x)), y33 = ln(w(√x)), y34 = ln(w(1/√x)), y35 = ln(w(ln(x))), y36 = ln(w(exp(x)))
ψ(y) = sinh(y):   y37 = arsinh(w(x)), y38 = arsinh(w(1/x)), y39 = arsinh(w(√x)), y40 = arsinh(w(1/√x)), y41 = arsinh(w(ln(x))), y42 = arsinh(w(exp(x)))
ψ(y) = arsinh(y): y43 = sinh(w(x)), y44 = sinh(w(1/x)), y45 = sinh(w(√x)), y46 = sinh(w(1/√x)), y47 = sinh(w(ln(x))), y48 = sinh(w(exp(x)))
ᵃ Special cases: exponential functions (when the base polynomial is of the first order) as well as Gaussian functions (when the base polynomial is of the second order).
ᵇ A special case: power functions (when the base polynomial is of the first order).
When considering many approximations, the selection of the final response function in the initial stage of RFM
development was made arbitrarily [33], while now it is the subject of additional optimization [68] related to curve
fitting correlation maximization, minimization of the Root Mean Squared Error (RMSE) or the residues variance.
In addition to, or interchangeably with, the criteria calculated between the discrete data and the approximation, analogous
magnitudes calculated between the LSM and the WLSM solutions are used [53]—a change of weights in the calculations
does not affect how the structure behaves in reality. If the selected criteria indicate different approximations, it is
assumed that the one with the smaller coefficients number is chosen [53], which reduces the risk of the Runge
effect [73]—deterioration of the quality of polynomial interpolation, especially visible at the ends of the intervals,
despite increasing their degree—and is consistent with the principle of using the simplest possible theory called the
Occam’s razor. It is widely used, among others, to choose between statistical models with a different number of
parameters: in the AIC [74], BIC [75], SABIC [76], or RMSEA criteria [77]. All the above-mentioned conditions
are combined in the response selection criterion applied here, the modified Root Mean Squared Error (RMSE_mod),
given by the formula:
RMSE_{mod} = -\log\big[ (RMSE) \cdot (RMSE_w) \big] \cdot \Big[ 1 - \frac{\log(POa)}{\log(49)} \Big]^{3},   (31)
where POa denotes a degree of the approximation base polynomial, RMSE is calculated between the LSM
approximation and the input discrete data, while RMSEw —between the LSM approximation and its weighted
version. Let fˆ(bi ) and f (bi ) denote respectively the obtained discrete data and the values of the considered
approximations for the given bi (i = 1, . . . ,N). Then, the following formula applies for the Root Mean Squared
Error (RMSE):
RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (r_i)^2} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \big( \hat{f}(b_i) - f(b_i) \big)^2},   (32)
where the quantity ri is called the residuum (error or remainder). Taking the above into account, 48 groups of
functions based on polynomials of the following form have been considered:
- for the reference dependencies
y = \psi^{-1}(w(\varphi(x))),   (33)
- and for the approximations
f(b) = \psi^{-1}(w^{*}(\varphi(b))),   (34)
where the argument is a random variable (x or b) with the truncated Gaussian probability density distribution.
Reference base polynomials of first, second and fifth order were finally selected for further calculations (Table 2)
Table 2
The data concerning the selected reference base polynomials.
Polynomial order [POb]   Polynomial form
1                        w1 = 10 + x
2                        w2 = 10 + 10^{-1}(-x + x^2)
5                        w5 = 10 + 10^{-4}(-x + x^2 + x^3 + x^4 - x^5)
with E[x] = x^0 = 1.0 and α(x) ∈ [0.0, 0.3]. All reference responses then take (in the considered interval)
only positive values, as they are ultimately intended to simulate performance functions describing extreme magnitudes of
structures—e.g. deflections, displacements, or stresses—with respect to the given input random parameter.
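To make the later comparisons concrete, the reference (TAM) characteristics of such a composite response can be obtained by direct quadrature over the truncated support. The sketch below is an illustration only: the y6/POb = 1 case of Tables 1–2 is hard-coded, the integration limits follow the three-sigma rule, and no renormalization of the density is applied, consistently with Eqs. (1)–(2) taken with modified limits:

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

x0, k = 1.0, 3.0

def y6_pob1(x):
    """Reference dependence y6 with the first-order base polynomial w1,
    i.e. f(x) = w1(exp(x)) = 10 + exp(x) (cf. Tables 1 and 2)."""
    return 10.0 + np.exp(x)

def tam_moments(f, alpha):
    """Expectation, output CoV, skewness and kurtosis by direct integration
    over [x0 - 3*sigma, x0 + 3*sigma] with the untruncated Gaussian density."""
    sig = alpha * x0
    lo, hi = x0 - k * sig, x0 + k * sig
    pdf = lambda x: norm.pdf(x, x0, sig)
    E = integrate.quad(lambda x: f(x) * pdf(x), lo, hi)[0]
    mu = [integrate.quad(lambda x: (f(x) - E) ** p * pdf(x), lo, hi)[0]
          for p in (2, 3, 4)]
    return E, np.sqrt(mu[0]) / E, mu[1] / mu[0] ** 1.5, mu[2] / mu[0] ** 2 - 3

E, cov, skew, kurt = tam_moments(y6_pob1, alpha=0.10)
```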
Due to the use of perturbation methods, considerations about the required order 2M to obtain a satisfactory
accuracy and their convergence should be started with the Taylor expansion of the 2nth order of the real function
f (x) in the neighborhood D of the mean value of the random variable x0 :
f(x) = a_0 + \sum_{i=1}^{2n} a_i (\Delta x)^i + R_{2n} = S_{2n} + R_{2n}.   (35)
It should be noticed that to use the three-sigma rule, the convergence of Eq. (35) is required in the following
range:
|x - x^0| \le \Delta x_{max} = 3 \alpha_{max}(x) \cdot x^0,   (36)
which occurs when the condition for the sequence of remainders R2n is satisfied:
\forall_{x^0 - \Delta x_{max} \le x \le x^0 + \Delta x_{max}} \; \Big\{ \lim_{n \to \infty} R_{2n} = 0 \Big\}.   (37)
It should be additionally emphasized that to assure the validity of the expansion (35), the function f(x) must be
(2n+1)-times differentiable. Only in cases of special dependencies, e.g. given as examples in mathematics textbooks,
with appropriate mathematical proficiency, it is possible to prove the condition given in Eq. (37). The basic problem
is the symbolic determination of the formula for any order derivative of the state function, occurring in the explicitly
expressed remainder, in the Schlömilch–Roche or some other integral form. This task is even more difficult when
we are dealing with composite functions—more complex derivative formulas—obtained based on RFM—fractional
coefficients. The use of the first of the above-mentioned forms to estimate the remainder at a selected finite n was
considered inappropriate also due to its imprecise nature.
Based on the limit definition, as a universal replacement solution for Eq. (37), it was proposed to numerically
show, for adequately large 2n from a left-hand neighborhood of 2M, denoted S− (2M), a decreasing nature of the
remainder modulus and its correspondingly small value. As a measure of the expansion error of the 2nth order, the
maximum value |R2n | = |f (x) − S2n | for the argument from the range (36) could be taken. However, taking into
account the subsequent use of the probability density function in integral formulas, it is crucial that in S− (2M) the
following value should decrease:
\Big| \int_{x^0 - 3\alpha x^0}^{x^0 + 3\alpha x^0} R_{2n} \cdot p_b(x) \, dx \Big| = \Big| \int_{x^0 - 3\alpha x^0}^{x^0 + 3\alpha x^0} \{ f(x) - S_{2n} \} \cdot p_b(x) \, dx \Big|,   (38)
where the limits of integration were adopted, for example, as in the TISPT scheme. Further considerations are also
presented in the example of this approach.
After integration and changing the error formula from absolute to relative one, the final form of the value for
which the monotonicity was tested is as follows:
\Delta_{rel,m1} = \Big| \frac{E_{TAM} - E^{2n}_{TISPT}}{E_{TAM}} \Big|,   (39)
where E^{2n}_{TISPT} is the expected value calculated by the 2nth order truncated iterative stochastic perturbation method—
using Eq. (12), and E_{TAM} is the expected value calculated by the probabilistic analytical method with truncations
according to the three-sigma rule—using Eq. (1) with modified limits. It is worth noting that the above equation
can be used both for local verification of the numerical convergence of the Taylor series and for determining the
required order 2M_{m1} needed to obtain a satisfactory accuracy of the expected value. Analogous orders (4M_{m2}, 6M_{m3}, and
8M_{m4}) in the case of the central moments from the second upwards were determined by comparing the following
relative errors:
\Delta_{rel,m2} = \Big| \frac{Var_{TAM} - Var^{4n}_{TISPT}}{Var_{TAM}} \Big|,   (40)
\Delta_{rel,m3} = \Big| \frac{\mu_{3,TAM} - \mu^{6n}_{3,TISPT}}{\mu_{3,TAM}} \Big|,   (41)
and
\Delta_{rel,m4} = \Big| \frac{\mu_{4,TAM} - \mu^{8n}_{4,TISPT}}{\mu_{4,TAM}} \Big|   (42)
to the single-percent threshold with increasing n. However, taking into account the necessity of using the expected
value to determine the quantities appearing in Eqs. (40)–(42), its relative error must decrease in some S^-(2M_T),
where
M_T = \max \{ M_{m1}, M_{m2}, M_{m3}, M_{m4} \}   (43)
and 2M_T denotes the required Taylor expansion order.
However, it may happen that, despite the initially decreasing character, with a sufficiently large order of the
perturbation method, the value of the relative error given in Eq. (39) will start to increase. This will indicate the
lack of numerical convergence of the Taylor expansion for the considered αmax * with the simultaneous lack of
satisfactory accuracy. It is possible to determine the maximum value of the coefficient of variation for which these
conditions are met in such cases. The Taylor series always converges, but only in a certain neighborhood of the point x^0,
the width of which can be related to \alpha_{max}. The first illustrative example is a function belonging to the C^{\infty} class:
f(x) = \begin{cases} \exp(-1/x^2) & \text{when } x \ne 0 \\ 0 & \text{when } x = 0, \end{cases}   (44)
whose expansion into the Taylor series about x0 = 0 is equal to the function itself only at this point (due to the
zeroing of all coefficients ai ), and therefore the convergence interval is the set {0}. However, in the case of the
dependence f (x) = |x − a| (where a > 0) expanded e.g. about x0 = 2a, the limit value of the coefficient of
variation guaranteeing the ideal accuracy is 1/6—the sum of the series and the function itself are not equal for
x < a. When only the dependencies from Table 1 are concerned, such a situation occurred, for example, for the
function y26 (POb = 5), where the relative error of probabilistic moments began to increase with the order of the
TISPT method for αmax * = 0.30 (Fig. 1). The globally declining nature of errors was achieved already by reducing
αmax *—the coefficient of variation to which compliance was to be maintained—to 0.25.
Perturbation-based schemes comparison was made on the example of some function, whose convergence can be
verified directly by proving the condition included in Eq. (37). It was obtained by using the dependence y6 and
taking the base polynomial with the parameter POb = 1, which eventually led to the form f (x) = 10 + exp(x). All
the results of computational experiments have been collected in Figs. 2–3 for the first four probabilistic moments
of this function while increasing the Taylor series expansion order. It is seen here that the results obtained using
the classical linearized (SPT) and iterative (ISPT) schemes are very close to each other (Fig. 2a–b). However, it is
worth noting that ∆rel did not change with the increase of the perturbation order, which indicates that numerical convergence
of the relative error of the method was achieved. It was satisfactory (0.3%) only in the case of the expected value.
For the subsequent probabilistic moments, ∆rel was 50%, 99%, and 69%, respectively. This is due to the lack of
Fig. 1. The relative error of individual probabilistic moments ∆rel for various orders of the TISPT method (on the example of the dependence
y26 , POb = 5, αmax * = 0.30).
taking into account the truncations of the random variable. The TSPT approach completed with this element provided
greater accuracy for higher probabilistic moments (Fig. 2c), while still unsatisfactory. The errors amounted to 3.2%,
45%, and 16%, respectively, which were caused by substituting into their integral definitions for E[f (b)] only the
first term in Taylor expansion. Only the TISPT scheme, taking into account all of the above components, led to
a global gain in the accuracy of all probabilistic moments with perturbation order increase (Figs. 2d, 3 & 4). It
confirmed the conclusions contained in [51] and resulted from earlier experiments carried out by the author on a
smaller scale.
Using the TISPT scheme, the decreasing nature of the relative error was observed for almost all considered
reference functions—for the others, it was found that the Taylor series does not converge for a given αmax *. Its
intensity, however, differed depending on the group of functions and the degree of the base polynomial, which
influenced the TISPT required orders to obtain satisfactory accuracy (Tables 3 & 4). It is worth noting that regardless
of the value of POb, the smallest 2M was obtained for the following groups: 1, 13, 19, 31, 37, and 7, 25, 43, which is
the full first column of Table 1. Slightly larger required orders were obtained for the functions from the third, sixth,
fifth, fourth, and second columns, respectively. Such dependences were observed for the two largest considered
αmax * coefficients: 0.30 and 0.25. A similar regularity was not observed in the case of the rows of Table 1, which
indicates that from the perspective of the required order of the TISPT scheme, the form of the ϕ function is of
paramount importance. The lack of modification of argument x led to the fastest numerical convergence. Its slight
modifications, which did not change its growing character, made it necessary to use higher orders. Substituting for
ϕ more and more rapidly decreasing functions in the neighborhood of x0 (x−1/2 and x−1 , respectively) resulted in
obtaining the largest values of 2M.
Higher probabilistic moments required higher perturbation orders to obtain adequate relevance compared to the
truncated analytical method (Figs. 2d & 3, Table 3). Simultaneously, the most appropriate results of the TISPT
scheme were obtained using the same Taylor expansion of a given performance function substituted into the formulas
for the subsequent probabilistic moments. The fastest increase in their perturbation orders was obtained in such a
case (proportional to the power in the integral formula), and the values of higher characteristics relative errors
differed to the smallest extent. The required Taylor expansion orders 2M T to obtain numerical convergence and the
satisfactory accuracy of the results—for different αmax *—were collected in Table 4. These values for αmax * = 0.30
range from only 2 (occurring 43 times) or 4 (occurring 18 times) up to 2006 (occurring only once).
However, the relative errors—and, consecutively, the required orders of the TISPT scheme—decreased signif-
icantly assuming smaller values of the coefficient of variation, to which compliance with the truncated analytical
method solution was to be maintained (Fig. 3, Table 4). For example, the maximum values of 2M T when changing
Fig. 2. Relative error of individual probabilistic moments ∆rel for the perturbation-based schemes vs. TAM (on the example of the dependence
y6 , POb = 1, αmax * = 0.30).
from αmax * = 0.30 to αmax * = 0.25 decreased from 228 to 44 for POb = 1, from 396 to 46 for POb = 2 and from
2006 to 28 for POb = 5—however, note that for the dependence y26 and y44 a larger order was required: 2M T =
38. On the other hand, the assumption of a smaller threshold of relative error resulted in a significant increase of
the 2M value, but without affecting the loss of numerical convergence.
It is worth noting that when using the same order of Taylor expansion of a given response function substituted
into the subsequent probabilistic moments’ formulas, there is a large disproportion between the obtained accuracy
of the expected value and the remaining moments under consideration (Figs. 2d & 3).
Additionally, the relative error m1 (taking into account its previously discussed link with the expansion convergence)
was collated with the required order 2M_T (Figs. 5 & 6), which ensures the satisfactory accuracy of all the considered
characteristics. For αmax* ≤ 0.25, all the obtained results were at least two orders of magnitude lower (<10E−4) than the assumed
1% error threshold (Figs. 5 & 6a). Assuming the minimum order of expansion 2M_T as 10—it gives ideal results
for all polynomials of the same or lower degrees—all values of ∆rel were also greater than 10E−8. In the case of
Table 3
Various orders of TISPT scheme required for satisfactory accuracy for the first four probabilistic
moments with α max * = 0.30 (a dash indicates all these cases, where numerical convergence was
not achieved for assumed α max * value; the results obtained in case of numerical determination of
the derivatives are underlined; the data bars were scaled separately for the considered POb).
Table 4
Various orders of the TISPT scheme required for satisfactory accuracy with different αmax* (a dash indicates all these cases where
numerical convergence was not achieved for the assumed αmax* value; the results obtained in the case of determining derivatives
numerically have been underlined; the data bars were scaled separately for the considered POb).
Fig. 3. Relative error of individual probabilistic moments ∆rel for the TISPT scheme on the example of various reference functions (αmax *
= 0.30).
αmax* = 0.30 (Fig. 6b–c), the results differ from the previous regularity and, due to the small amount of data in the
upper range of 2M_T, it was decided not to describe them further.
By limiting the αmax* values to 0.25—the considered range of variability of the random parameter still remains
wide, ranging from 0.25E[x] to 1.75E[x]—a quick way is therefore proposed to determine the order of the TISPT
approach for which the required accuracy is obtained. It is recommended to consider the value provided in Eq. (39)
with an increasing order of expansion 2n, looking for a certain 2M_fin for which the value of the relative error of the
expected value is less than 10E−8 and in some left-hand neighborhood S^-(2M_fin) the decreasing character
of ∆rel is maintained—otherwise, a different response may be selected, e.g. one with a slightly lower value of the
RMSE_mod criterion. Further usage of such a Taylor expansion order—but not less than 10—should result in an error
value below 1% for all the characteristics considered in this experiment: the expected value, the variance, and the third and
fourth central moments. In that case, their orders will be 2M_fin, 4M_fin, 6M_fin, and 8M_fin, respectively.
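Written as a procedure, the above recipe amounts to a simple loop over the expansion order; the following fragment is a schematic rendering only (it reuses the helper routines sketched earlier, checks the decreasing character against a single preceding order for brevity, and all names are assumptions apart from the 10E−8 threshold and the minimum order of 10 quoted above):

```python
def select_order_2M_fin(f, E_TAM, sigma, taylor_coeffs_of, k=3.0,
                        tol=1e-8, max_order=200):
    """Search for 2M_fin: the lowest even order whose expected-value error
    (Eq. (39)) drops below the 10E-8 threshold while still decreasing."""
    prev_err, order = None, None
    for two_n in range(2, max_order + 1, 2):
        a = taylor_coeffs_of(f, two_n)                 # a_0 ... a_{2n}, Eq. (8)
        err = abs((E_TAM - expectation_tispt(a, sigma, k)) / E_TAM)
        if err < tol and prev_err is not None and err < prev_err:
            order = two_n
            break
        prev_err = err
    if order is None:
        raise RuntimeError("no convergence for this alpha_max*; "
                           "pick another response (e.g. next-best RMSE_mod)")
    return max(order, 10)   # never use less than the 10th-order expansion
```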
Fig. 4. Relative error of individual probabilistic moments ∆rel for different coefficients of variation to which compliance was to be maintained
(on the example of the dependence y2 , POb = 5).
An unquestionable disadvantage of the TISPT approach is the fact that as the order increases, the number of
digits used by the MAPLE system when performing calculations using software floating point numbers should also
be greater—150% of 2MT . This is due to the use of the error function in Eq. (13) and has the effect of increasing
the computation time. This was especially a problem when—in the case of αmax * = 0.30—the value of 2MT several
times exceeded 500 (Fig. 6c). However, with αmax * ≤ 0.25, the required orders of Taylor expansion did not exceed
50, and the calculation time was then a maximum of several minutes—clearly shorter than in the truncated analytical
method. These differences are additionally due to the rapidly growing number of terms in the TISPT approach
formulas (Fig. 7), which in the case of the fourth central moment increases about 10,000 times—from over 50
million at 2n = 50 to over 500 billion at 2n = 500.
If there is no need to truncate a random variable according to the three-sigma rule (k = 3), a much faster
solution is to use the ISPT approach. It does not require such high accuracy of calculations in MAPLE as TISPT,
and additionally, the number of terms in the formulas for the output central moments decreases significantly—as a
result of zeroing the term 1 − µ0. The ISPT approach should exhibit convergence not worse than that of the TISPT method,
since it corresponds to its special case k → ∞. Therefore, the TISPT approach was finally adopted in further calculations to be
able to present a comparison of the methods on the practical engineering examples.
Fig. 5. Relative error of the expected value (m1) versus the required Taylor expansion order of the examined functions.
4. Numerical illustrations
Uncertainty analysis with two different probabilistic methods, the Truncated Iterative Stochastic Perturbation
Technique (TISPT) and the truncated (semi-)analytical method (TAM or TSAM), was carried out for some practical
engineering examples from [62]. The TISPT approach was implemented based on the Taylor expansion of the order
of 2M fin . Computational implementation was completed thanks to the interoperability of two computer systems—
computer algebra software MAPLE and the Finite Element Method system ROBOT or ABAQUS Standard.
Composite response functions based on polynomials were determined using specific series of the FEM experiments
and the RMSEmod criterion. Further Taylor expansions of these functions resulted in probabilistic characteristics
of structural response computed with the TISPT, while the integration of the responses with the density kernel
returned the same characteristics according to the T(S)AM approach; each time one compares the expected values,
coefficients of variation, skewness, and also kurtosis, as the functions of the input uncertainty level. The third case
study presents them additionally as a function of the incrementing procedure progress. A collection of numerical
illustrations presented below extends a variety of the examples studied in the initial work with the iterative stochastic
perturbation technique for the truncated Gaussian uncertainty [51]:
– The first example contains initial verification and has rather educational motivation as this simply supported
beam under uniform bending is widely used to illustrate the basis of the strength of materials and civil
engineering structures design.
Fig. 6. Relative error of the expected value (m1) versus the required Taylor expansion order of the examined functions (for different αmax *).
– The second numerical illustration concerns the modal and second-order static analysis of eight different steel
diagrid grillages and it was motivated by some practical civil engineering applications and the necessity of
the reliability assessment.
– The third case study devoted to the uniformly extended homogeneous bar with a round cross-section includes
computer simulation of the basic strength test and shows stochastic analysis at the beginning of irreversible
elastoplastic deformations.
– The fourth numerical illustration concerns an application of the homogenization theory to composite materials
with incompressible particles and shows an application of the TISPT in large-scale computations.
4.1. Example 1: Simply supported beam under uniform load in the linear elastic regime
The first numerical example is devoted to the simply supported single-bay beam structure subjected to bending
by an external uniformly distributed constant load applied throughout its length l (Fig. 8)—the given input random
variable randomized according to the truncated Gaussian probability distribution function. This choice was directly
Fig. 7. The number of total terms in the formulas of the TISPT approach as a function of its order.
Fig. 9. Expected values [cm] (left) & coefficients of variation (right) of the extreme vertical deflection of the simply supported elastic beam.
motivated by geometrical imperfections mandatory in civil engineering design codes and by the existence of the analytical
solution for the extreme displacement in this beam. The following parameters were adopted in this analysis: the moment of
inertia J = 0.0001 m⁴, the uniform external load q = 10 kN/m, the elastic modulus e = 210 GPa, and the expectation of
the beam length E[l] = 10.0 m. Each time the expected value, coefficient of variation, skewness, as well as kurtosis
of the maximum displacement, fundamental for verification of the Serviceability Limit State (SLS), are observed
(u = 5ql⁴/(384 eJ)). Simple calculation of the deterministic counterpart proceeds from the classical Euler–Bernoulli
beam theory, and the shear (tangent) forces' contribution to the vertical deformation is neglected. The probabilistic output in this
case has been collected in Figs. 9–11, and all these results were plotted as functions of the input coefficient of
variation belonging to the interval α(l) ∈ [0.0, 0.25].
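For this closed-form response the whole TISPT computation reduces to a few lines; the sketch below (illustrative only, reusing the previously sketched helpers and treating the beam length l as the truncated Gaussian input) reproduces the type of quantities plotted in Figs. 9–11:

```python
import math
from math import comb

q, e, J = 10.0, 210e6, 1e-4     # load [kN/m], Young modulus [kPa], inertia [m^4]
E_l = 10.0                      # expected beam length [m]
C = 5.0 * q / (384.0 * e * J)   # so that u(l) = C * l^4

for alpha in (0.05, 0.10, 0.15, 0.20, 0.25):
    sigma = alpha * E_l
    # Taylor coefficients of the quartic response about E[l]; exact, cf. Eq. (8)
    a = [C * comb(4, i) * E_l ** (4 - i) for i in range(5)]
    E_u = expectation_tispt(a, sigma)          # Eq. (12)
    V_u = variance_tispt(a, sigma)             # Eq. (15)
    print(alpha, E_u * 100.0, math.sqrt(V_u) / E_u)   # deflection [cm], output CoV
```

Since u(l) is a quartic polynomial, its Taylor expansion terminates after the fourth-order term, so already low expansion orders reproduce the expectation and the variance up to round-off; for the general composite responses of Table 1 the order-selection procedure described earlier applies instead.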
Fig. 10. Skewness of the extreme vertical deflection of the elastic beam.
Fig. 11. Kurtosis of the extreme vertical deflection of the elastic beam.
The first numerical example results showed the satisfactory accuracy of the values of the probabilistic characteristics
for 2M_fin = 10. The actual relative error was much smaller than the assumed one (1%) and for the subsequent
moments it was at most 1.94E−16, 4.61E−16, 4.37E−14, and 2.77E−15, respectively. In the case of graphical
presentation, this translated into a visually identical course (Figs. 9–11) of continuous TISPT dependence (lines)
and TSAM discrete data (symbols). It is worth emphasizing, however, that the first technique made it possible to
accurately observe local, rapid changes in the values of the characteristics, especially for small α(l), which in the case
of the second method is significantly limited by their discretization (Figs. 10a & 11a). Only for the fifty times smaller
increment ∆α does the number of discrete values seem to be sufficient to properly demonstrate the holistic dependence
in the interval under consideration (Figs. 10b & 11b). The overall computational effort (including plotting) for
TISPT was equivalent to 39.99 MB and 0.97 s for the entire stochastic perturbation-based solution discussed in this
example—with 20-digit precision. The TSAM then cost 39.68 MB and 1.35 s. One can conclude that another
advantage of the TISPT approach was the reduction of the total computation time. This example was performed
using a MAPLE 13 (32-bit version) installation in the Windows 8 64-bit operating system on a computer equipped
with an Intel Core i7-4710HQ four-core processor.
Analyzing probabilistic characteristics themselves, an increase in the expected values together with α(l) is
observed (Fig. 9a), because the response function is convex in the considered interval. The values of the resulting
coefficient of variation (Fig. 9b) increase faster than the input α(l). Simultaneously, the relationship is almost linear
in the entire considered range of variability, where the slope is close to 4.0. Thus, the variability of the output
parameter is larger than that of the input parameter. Simultaneously, the proportionality coefficient each time agrees
with the symbolic estimate obtained here in Eq. (22). The skewness always exhibits approximately the value of 1.0
as α(l)→0, and then increases to approximately 1.4, reaching a local maximum (Fig. 10). In the further parts
of the graph, there is a positive local minimum and a global increase, which proves the right-hand asymmetry of
the state function distribution. The sign of the final value of skewness is justified by the convexity of the response
function, while the initial βs agrees with the symbolic estimate computed here. Kurtosis exhibits a global increase
in its value with the increasing input coefficient of variation (Fig. 11). It starts abruptly from around −2.0, and
then around 0 the graph levels out on a certain section to further increase to positive values. Thus, when α(l) is
increased, the concentration of the state function value increases from less than in the normal distribution, through
similar up to the greater; a correctness of the initial value of kurtosis is confirmed by the symbolic estimate given
in Eq. (24).
Eight grillages designed by the applicable standards for steel structures were analyzed in this experiment. They
were computational models of a supporting steel structure for a rectangular glass floor (9.00 × 5.20 m). It belongs
to the category of use C1 [78], including areas with tables where people may congregate, e.g., cafés or restaurants. It
was additionally assumed that due to the visual (architect or investor) and design requirements (easier connections,
as well as the internal forces transfer), the dimension of the cross-section in the normal direction must be constant
within each model.
The following random variables were assumed as truncated Gaussian parameters: the multiplier of (1) Young’s
modulus e, (2) wall thickness t, and (3) length of the manufactured elements l. They represent the variability
related to the material, cross-sections, and geometry, correspondingly. The value of their coefficients of variation
was considered in the interval [0.0, 0.25]. All the expected values were 1.0 since e, t, and l are multipliers of
deterministic quantities. The entire numerical example concerning steel diagrid grillages is shown schematically in
Fig. 12.
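For illustration of the input model only, such symmetrically truncated Gaussian multipliers can be generated as follows; this is a minimal sketch assuming the ±3σ truncation stated in the concluding remarks, and neither TISPT nor TSAM relies on sampling.

import numpy as np
from scipy.stats import truncnorm

def truncated_multiplier(alpha, mean=1.0, size=10000, seed=0):
    """Realizations of a dimensionless multiplier (e, t or l) with expected value 1.0 and input
    coefficient of variation alpha, truncated symmetrically at +/- 3 standard deviations."""
    sigma = alpha * mean
    return truncnorm.rvs(-3.0, 3.0, loc=mean, scale=sigma, size=size,
                         random_state=np.random.default_rng(seed))

e = truncated_multiplier(0.10)            # e.g. Young's modulus multiplier at alpha(e) = 0.10
print(e.min(), e.max(), e.mean(), e.std() / e.mean())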
The FEM analysis was carried out with the use of the civil engineering system ROBOT. Its usage was driven by
the fact that this system enables both obtaining discrete solutions to a given structural problem as well as shaping
and dimensioning of steel structures so that the analysis is fully supported by this program. Two types of grillages
were considered: orthogonal (models OB and OS) and diagrid. The latter were additionally grouped into having a
mesh of right triangles (models RB and RS) and equilateral triangles arranged in the transverse (models TB and
TS) as well as longitudinal directions (models LB and LS) to represent different possible architectural concepts of
floor triangularization. In each of these groups, there were versions with a smaller and bigger mesh size (Fig. 13),
with uniform division, so that the nodes of the real structure are as repeatable as possible.
All of the supports were defined as rigid, with translations fixed and rotations free in each direction. Table 5 shows
the properties of the models in detail. There, it can be seen how the reduction of the mesh size directly influences
the increase in the number of nodes, supports, structural members, and panels. However, a thinner layer of laminated
glass glazing is required. In this case, it consists of two hardened load-bearing panels and an 8 mm thick anti-slip
top layer—also hardened. All the loads from the Eurocodes in persistent design situation were considered for the
design working life of 50 years—a common structure according to [78]. The first (marked with G) is connected
with the weight of the glass covering and the supporting structure itself. The second (marked with Q) represents the
imposed live loads of 3.0 kN/m². The surface loads were distributed using the trapezoidal
and triangular methods.
The 3D frame two-noded finite elements with six degrees of freedom in each node and rigid connection were
used in each model and the incremental formulation of the FEM equations [79–81] was employed. The global
deformation was computed in the Serviceability Limit State (SLS) combination (G + Q), and the stresses—in the
Ultimate Limit State (ULS) combination (1.35 × 0.85G + 1.5Q), obtained following the formulas contained in [78].
Fig. 12. Scheme of the numerical example concerning steel diagrid grillages.
Fig. 13. Grillage structures under consideration: (a) model OB, (b) model OS, (c) model RB, (d) model RS, (e) model TB, (f) model TS, (g) model LB, (h) model LS.
Table 5
Geometrical parameters & boundary conditions in different grillage models.
Model | Nodal points | Supports | Bars | Panels | Basic panel size [m] | Glass pane thicknesses [mm] | Steel weight [kg]
OB | 24 | 16 | 38 | 15 | 3.00 × 3.12 (rectangle sides) | 2 × 19 + 8 | 7325
OS | 40 | 22 | 67 | 28 | 2.25 × 2.23 (rectangle sides) | 2 × 12 + 8 | 8557
RB | 24 | 16 | 53 | 30 | 3.00 × 3.12 (legs) | 2 × 10 + 8 | 9983
RS | 40 | 22 | 95 | 56 | 2.25 × 2.23 (legs) | 2 × 8 + 8 | 11,388
TB | 32 | 20 | 73 | 42 | 3.00 (equilateral triangle side) | 2 × 10 + 8 | 10,090
TS | 50 | 26 | 121 | 72 | 2.25 (equilateral triangle side) | 2 × 8 + 8 | 11,789
LB | 24 | 16 | 53 | 30 | 3.46 (equilateral triangle side) | 2 × 10 + 8 | 9582
LS | 38 | 22 | 89 | 52 | 2.60 (equilateral triangle side) | 2 × 8 + 8 | 10,841
The second-order global analysis P−δ was used taking into account the influence of deformation on the statics of
the system—geometric nonlinearity—to more realistically predict the structural behavior. The incremental method
was applied using a modified Newton–Raphson algorithm to solve the nonlinear problem. The stiffness matrix was
updated only after each subdivision—not after each iteration—as a result of its correction with the algorithm of
the BFGS method. The following parameters were set to complete these experiments: (i) the number of the load
increments equals 5, (ii) the maximum number of iterations per increment is 40, (iii) the number of reductions for
the increment length equals 3, (iv) coefficient of the increment length reduction is taken as 0.5, (v) the maximum
number of BFGS corrections equals 10, (vi) the tolerance factor of the relative norm for the residual forces is adopted
as 0.0001, and also (vii) the tolerance factor for the relative norm of displacements equals 0.0001.
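The incremental scheme described above can be summarized schematically as follows. The sketch mirrors the quoted settings (stiffness assembled once per increment, at most 10 BFGS corrections, up to 40 iterations, relative residual tolerance of 0.0001, 5 increments with up to 3 halvings of the increment length), but it is a generic illustration rather than the algorithm implemented in ROBOT; internal_force and tangent are hypothetical user-supplied callables.

import numpy as np

def solve_increment(internal_force, tangent, u, f_target,
                    tol=1e-4, max_iter=40, max_bfgs=10):
    """One increment of a modified Newton-Raphson scheme: the stiffness is assembled once
    per increment and only corrected by (at most max_bfgs) BFGS secant updates."""
    H = np.linalg.inv(tangent(u))                  # inverse stiffness kept for the whole increment
    r = f_target - internal_force(u)               # out-of-balance (residual) forces
    for _ in range(max_iter):
        du = H @ r
        u = u + du
        r_new = f_target - internal_force(u)
        if np.linalg.norm(r_new) <= tol * max(np.linalg.norm(f_target), 1.0):
            return u, True
        s, y = du, r - r_new
        if max_bfgs > 0 and y @ s > 0.0:           # BFGS correction of the inverse operator
            rho, I = 1.0 / (y @ s), np.eye(len(u))
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
            max_bfgs -= 1
        r = r_new
    return u, False

def incremental_solution(internal_force, tangent, f_ext, n_inc=5, n_red=3, red=0.5):
    """Load applied in n_inc increments; a non-converged increment is retried with its
    length multiplied by red (at most n_red reductions)."""
    u, lam, dlam = np.zeros_like(f_ext), 0.0, 1.0 / n_inc
    while lam < 1.0 - 1e-12:
        step = min(dlam, 1.0 - lam)
        for _ in range(n_red + 1):
            u_try, ok = solve_increment(internal_force, tangent, u, (lam + step) * f_ext)
            if ok:
                break
            step *= red                            # reduce the increment length and retry
        if not ok:
            raise RuntimeError("increment failed after all length reductions")
        u, lam = u_try, lam + step
    return u

Here internal_force(u) would return the nodal forces equilibrating the current displacements u, and tangent(u) the corresponding stiffness matrix.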
The Finite Element Method-based modal analysis was performed, too. Higher modes were also computed, but
only the first mode was verified. The subspace iteration algorithm was used for solving the eigenvalue problem.
The maximum number of iterations was set as 40 with a tolerance factor of 0.0001.
Yield strength was assumed as 235 MPa and the structural members were made of rectangular hollow sections
(Fig. 13). This type of cross-section was chosen because of its insensitivity to lateral–torsional buckling and because
its top walls create a flat surface to support the panels; hence, it is usually used in structures
with glass covers [82]. This choice made it possible to avoid considering the warping in the FEM analysis as
an additional degree of freedom in the node—the seventh in this case—because its influence is negligible for the
selected cross-sections.
The sizing process was similar and deterministic in each model, carried out entirely according to Eurocode 3.
Initially, an identical RHS section of a given dimension in the normal direction was adopted for all the bars. Next, a
division into groups of bars was made, taking into account the distribution of internal forces in the ULS combinations
and deformations in the SLS combinations. A cross-section with a given dimension in the normal direction was
then calculated for each of them. The process of creating groups and designing their cross-sections was iterative—to
have no more than three profiles (for a given grillage model—for economic reasons) with their utilization not exceeding
90%. The above procedure was carried out for various constant dimensions in the normal direction—e.g., 250, 260,
300, 350, 400 mm—finally choosing the one with the lowest mass. The final profiles are shown in Fig. 13, and the
masses—calculated based on the lengths of the model bars, not the real structural elements—are in Table 5. The
minimum steel weight was found for grillages with the orthogonal mesh (models OB & OS), and the maximum—for
the triangular mesh in the transverse direction (models TB & TS).
The most demanding deterministic limit criterion in all models was the SLS condition connected with the global
deformation of structures. It was met for the OB, OS, RB, RS, TB, TS, LB, and LS models at 86.9%, 87.3%,
83.0%, 85.3%, 88.4%, 87.2%, 81.5%, and 86.9%, correspondingly. The maximum efficiency ratio of the structural
members in the ULS was simultaneously remarkably lower, amounting in turn to 49.1%, 43.2%, 57.4%, 43.2%,
41.8%, 49.2%, 48.1%, and 40.0%.
Table 6
Numerical data concerning response functions for different grillage structures.
Random variable | Performance function | Mean of the RMSEmod | Approximation base polynomial degree | Approximation group formula | Number of response function group | Required order 2Mfin
e | ug | 38.74 | 1 | y2 = w(1/x) | 1 | 40
e | ul | 37.13 | 1 | y7 = 1/√(w(x)) | 2 | 40
e | ω | 39.10 | 1 | y13 = w(x) | 3 | 22
t | ug | 25.62 | 4 | y14 = √(w(1/x)) | 4 | 42
t | ul | 28.02 | 2 | y14 = √(w(1/x)) | 5 | 40
t | ω | 25.66 | 4 | y11 = 1/w(ln(x)) | 6 | 24
t | σred | 20.16 | 2 | y14 = √(w(1/x)) | 7 | 40
l | ug | 22.52 | 3 | y29 = exp(w(ln(x))) | 8 | 10
l | ul | 25.58 | 3 | y29 = exp(w(ln(x))) | 9 | 10
l | ω | 37.42 | 1 | y29 = exp(w(ln(x))) | 10 | 54
l | σred | 23.11 | 3 | y1 = w(x) | 11 | 10
The final purpose of the Finite Element Method experiments was to obtain discrete solutions to the given
structural problems for all the grillages. Therefore, the resulting values of the fundamental frequency, the extreme
reduced stresses (according to the Huber–Mises–Hencky hypothesis), and the maximum values of the global vertical
displacement and local deflection were determined. For all FEM data series, the response functions were selected
according to the RMSEmod criterion and for each of them the required Taylor expansion order 2Mfin was determined.
The results were collected in Table 6. The expected value, coefficient of variation, skewness, and kurtosis were
obtained for all analyzed variables—depending on their input coefficient of variation—and the state function.
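The selection summarized in Table 6 can be illustrated by the following sketch, in which a subset of the candidate composite forms is fitted by ordinary least squares in transformed variables and the best-fitting form is retained. A plain RMSE is used here as a stand-in for the authors' RMSEmod criterion, and the data arrays are placeholders for the actual FEM series.

import numpy as np

# candidate composite forms y = g(w(h(x))): (label, inner transform h, outer link g, inverse link)
CANDIDATES = [
    ("y1: w(x)",          lambda x: x,       lambda w: w,       lambda y: y),
    ("y2: w(1/x)",        lambda x: 1.0 / x, lambda w: w,       lambda y: y),
    ("y11: 1/w(ln x)",    np.log,            lambda w: 1.0 / w, lambda y: 1.0 / y),
    ("y14: sqrt(w(1/x))", lambda x: 1.0 / x, np.sqrt,           lambda y: y ** 2),
    ("y29: exp(w(ln x))", np.log,            np.exp,            np.log),
]

def select_response_function(x, y, degree):
    """Fit every candidate by least squares in the transformed space and keep the smallest RMSE."""
    best = None
    for label, h, g, g_inv in CANDIDATES:
        coeffs = np.polyfit(h(x), g_inv(y), degree)     # polynomial w fitted after the transforms
        rmse = float(np.sqrt(np.mean((g(np.polyval(coeffs, h(x))) - y) ** 2)))
        if best is None or rmse < best[0]:
            best = (rmse, label, coeffs)
    return best

# placeholder FEM series (input multiplier x, structural response y) just to show the call
x = np.linspace(0.925, 1.075, 11)
y = 1.0 / (0.8 * x + 0.2)
print(select_response_function(x, y, degree=1))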
The results of this numerical example showed the satisfactory accuracy of the values of the probabilistic characteristics.
The actual relative error was much smaller than the assumed one (1%) and for the subsequent moments it reached at
most 8.72E−9, 8.91E−7, 6.97E−6, and 8.36E−6, in turn. In the case of graphical presentation, this translated
into a visually identical course (Figs. 14–19) of continuous TISPT dependence (lines) and TSAM discrete data
(symbols). Again, the first technique made it possible to accurately observe local, rapid changes in the values of the
characteristics, especially for small α(b) (Figs. 18 & 19), which in the case of the second method is significantly
limited by their discretization. Another advantage of the TISPT approach was the reduction of the total computation
time in this example.
Focusing on the results of probabilistic characteristics for individual models, the differences occur mainly in
the case of the expected values (Fig. 14), where they are mainly caused by the different deterministic efficiency ratios.
Nevertheless, the graphic illustration of the vast majority of higher-order characteristics is visually identical for the
considered grillages (Figs. 15, 17, 18a–b & d, 19a–b & d). This is due to their similar structure, which results in
a similar sensitivity of the state function to the input coefficient of variation. The exceptions are those obtained
when considering the wall thickness variability (Figs. 16, 18c, 19c), which is explained by the non-uniformity of its
distribution in individual models, and thus a more complex form of the response function—compared to the rest.
Analyzing probabilistic characteristics themselves, an increase in the expected values with α(b) is observed
(Fig. 14a–b, d) when the response function is convex in the considered interval, while a decrease (Fig. 14c)—when
it is concave. When discussing higher characteristics, it is worth noting that in the case of the response function
groups (Table 6) numbered 1, 2, 4, 5, 7, the coefficient of variation (Figs. 15a, 16a–b, d), the state function skewness
and kurtosis have very similar graphic presentations. A similar situation occurs for groups 3 and 6 (Figs. 15b and
16c) as well as 8 and 9 (Fig. 17a–b). This is explained by the similarity of the response function course or the used
function group formula for approximation (Table 6). Simultaneously, the results do not seem to be sensitive to the
values of individual coefficients of the base polynomials. They are partially reduced due to the relativity of higher
characteristics, Eqs. (4)–(6), so that the distributions of the probability density function of a given limit function do
not differ much in shape, but only in their location along the abscissa axis.
Numerical values of the resulting coefficient of variation for the response function groups 1, 2, 4, 5, and 7 increase
only slightly faster than the input α(b). Simultaneously, the linear relationship is maintained in the range [0.0, 0.15]
(Figs. 16a, 17a–b, d). A similar situation occurs for group 10, with proportionality being maintained at α(b) <
0.10 (Fig. 17c). In other cases, the relationship is linear in the entire considered range of variability, whereas: for
groups 3 and 6 the slope is less than approximately 0.5 (Figs. 15b and 16c), for groups 8 and 9—larger than 4.5
(Fig. 17a–b), and for group 11—greater than 2.5 (Fig. 17d). Thus, in most groups of functions (9 out of 11), the
variability of the output parameter is greater than or close to the input parameter.
The skewness always tends to a value of approximately 1.0 as α(b)→0, and then increases to approximately
1.4, reaching a local maximum—with a coefficient of variation in the range from 5.61E−4 to 7.42E−3. In
the further parts of the graphs, in most cases there is a positive local minimum—for α(b) ranging from 0.013 to
0.039—and a global increase (Fig. 18a–b,d), which proves the right-hand asymmetry of state function distributions.
The exception is the response function groups with numbers 3 and 6, where after the local maximum there is
a decrease to negative values (Fig. 18c). The sign of the final value of skewness is justified by the convexity of
the response function. A global increase in kurtosis value is observed while increasing the input coefficient of
variation. It always starts abruptly from around −2.0 and then around 0 (for α(b) from 2.43E−4 to 3.27E−3) the
graph levels out over a certain section before increasing further to positive values (Fig. 19). Thus, when α(b) is increased,
the concentration of the state function values grows from below that of the normal distribution, through a comparable
level, up to above it.
Fig. 15. Coefficients of variation of grillages state functions with Young’s modulus as input random variable.
The third probabilistic numerical analysis concerns uniform uniaxial extension of the cylindrical steel bar, which
has random Young’s modulus e of the truncated Gaussian probability distribution function. Elastoplastic FEM
experiments were carried out with the use of 5901 rectangular finite elements for the axisymmetric stress analysis
with the reduced integration option (CAX4RH: 4-node bilinear axisymmetric quadrilateral, hybrid, constant pressure,
reduced integration, hourglass control) in the system ABAQUS. This FEM test was performed using ABAQUS
6.13-1 installed under Windows 7 (64-bit) on a computer equipped with an i7-4700MQ 4-core
processor. The FEM analysis was completed with 69 increments, 140 iterations, and 3402 equations, which lasted
12.40 s of the total CPU time. The initial increment was set as 0.001, the minimum increment was equal to 10E−7,
while the maximum—0.01. The extending force was fixed as 200.0 N, the specimen has a length equal to 140.0 mm, a
diameter equal to 12.0 mm, the yield stress has been fixed as fy = 235 MPa, while the expected value of Young’s
modulus and Poisson ratio were defined as E[e] = 210 GPa and ν = 0.3, accordingly. The input coefficient of
variation was considered in the interval α(e) ∈ [0.0, 0.25] (Young’s modulus e ∈ [157.5; 262.5] GPa). Only a quarter
of the steel cylindrical bar was modeled due to axial symmetry of the computational domain. Fig. 20 presents this
solid structural element with imposed boundary conditions—the left vertical edge is subjected to the tensile stress,
the lower edge is the symmetry axis with zero vertical displacements, while horizontal displacements were fixed
along the right edge. A small cut-out of one corner was provided to ensure the expected necking in the middle of
the bar (at the vertical symmetry axis).
The basic probabilistic characteristics were computed after each 10% of the analysis progress for the composite
structural responses of the dependence y7 with the approximation base polynomials of degree POa = 1, which is
optimal for the entire deformation process according to the RMSEmod criterion. The extreme displacement
is inversely proportional to Young’s modulus, so that an infinite number of partial derivatives of u with respect to e exist and a
convergence—or divergence—of the Taylor expansion inherent in TISPT may be verified again. The numerical
example results showed the satisfactory accuracy of the values of the probabilistic characteristics for 2Mfin =
40. The actual relative error was much smaller than the assumed one (1%) and for the subsequent moments it reached
at most 9.58E−9, 2.84E−7, 2.09E−6, and 4.94E−6, in turn. In the case of graphical presentation, this translated
into a visually identical course (Figs. 21 and 22) of continuous TISPT dependence (lines) and TSAM discrete
data (symbols). Their coincidence is especially desirable and promising because it suggests the applicability of the
perturbation methods family also to large deformations.
Fig. 16. Coefficients of variation of grillages state functions with wall thickness as input random variable.
Again, the first technique made it possible to accurately
observe local, rapid changes in the values of probabilistic characteristics, especially for small α(e) (Fig. 22), which
in the case of the second method is significantly limited by their discretization. The overall computational effort
(including plotting) for TISPT was equivalent to 47.86 MB and 4.90 s for the entire stochastic perturbation-based
solution discussed in this example—with 20 discrete values of α(e) to be generated. The TSAM then cost 47.92
MB and 10.30 s. For the same accuracy, using the TISPT approach again led to a reduction of the total computation
time. This example was performed using the 32-bit version of MAPLE 13 under Windows
8 (64-bit) on a computer equipped with an i7-4710HQ 4-core processor.
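The convergence of the Taylor expansion inherent in TISPT can be verified, in a simplified setting, by comparing moments obtained from the truncated expansion with a quadrature reference. The sketch below assumes, for simplicity, an inverse composite response y = 1/w(e) with a hypothetical degree-1 polynomial w and a ±3σ symmetric truncation; it monitors the first four raw moments, from which the expectation, CoV, skewness and kurtosis follow, and is not the authors' implementation.

import numpy as np
from numpy.polynomial import polynomial as P
from scipy import stats, integrate

x0, alpha = 1.0, 0.15                 # mean multiplier and input coefficient of variation (illustrative)
a, b = 0.8, 0.2                       # hypothetical degree-1 polynomial w(x) = a*x + b
sigma = alpha * x0
pdf = stats.truncnorm(-3.0, 3.0, loc=0.0, scale=sigma).pdf   # density of dx = x - x0

def weighted(g):                      # integral of g(dx) over the truncated density
    return integrate.quad(lambda t: g(t) * pdf(t), -3.0 * sigma, 3.0 * sigma)[0]

def response(t):                      # inverse composite response (simplified y7-type stand-in)
    return 1.0 / (a * (x0 + t) + b)

reference = [weighted(lambda t, k=k: response(t) ** k) for k in range(1, 5)]

def taylor_moments(order):
    """Raw moments E[y^m], m = 1..4, from the Taylor expansion of y truncated at the given order."""
    c = [(-a) ** k / (a * x0 + b) ** (k + 1) for k in range(order + 1)]   # Taylor coefficients
    mu = [weighted(lambda t, k=k: t ** k) for k in range(4 * order + 1)]  # input central moments
    moments, poly = [], np.array([1.0])
    for _ in range(4):
        poly = P.polymul(poly, c)                                         # coefficients of the m-th power
        moments.append(float(np.dot(poly, mu[: len(poly)])))
    return moments

for two_m in (2, 4, 8, 16):
    errors = [abs(t / r - 1.0) for t, r in zip(taylor_moments(two_m), reference)]
    print(two_m, ["%.1e" % e for e in errors])   # increase the order until all errors drop below 1%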
Focusing on the results of probabilistic characteristics for each 10% of the analysis progress, the differences
occur only in the case of the expected values (Fig. 21a), where they are mainly caused by the different deterministic
deflections. In contrast, the graphic illustration of the higher characteristics is visually identical throughout
the incremental analysis (Figs. 21–22). This is explained by the used function group formula for approximation
(Table 6). Simultaneously, the results do not seem to be sensitive to the values of individual coefficients of the
base polynomials. They are partially reduced due to the relativity of higher characteristics, Eqs. (4)–(6), so that
the distributions of the probability density function of the extreme vertical deformation throughout the incremental
analysis do not differ much in shape, but only in their location along the abscissa axis.
Fig. 17. Coefficients of variation of grillages state functions with length of the manufactured elements as input random variable.
Analyzing the characteristics themselves, an increase in the expected values with α(e) is observed (Fig. 21a)
because the response function is convex in the considered interval. The values of the resulting coefficient of
variation increase only slightly faster than the input α(e). Simultaneously, the linear relationship is maintained in
the range [0.0, 0.10] (Fig. 21b). Thus, the variability of the output parameter is slightly larger than or close to that of the
input parameter. The skewness always tends to a value of approximately 1.0 as α(e)→0, and then increases,
reaching a local maximum (Fig. 22a). In the further parts of the graph, there is a positive local minimum and a
global increase, which proves the right-hand asymmetry of the state function distribution. The sign of the final value
of skewness is justified by the convexity of the response function, while kurtosis exhibits a global increase in its
value, which is observed with the increasing input coefficient of variation (Fig. 22b). It starts abruptly from around
−2.0 and then, around 0, the graph levels out over a certain section before increasing further to positive values. Thus, when
α(e) is increased, the concentration of the state function values grows from below that of the normal distribution,
through a comparable level, up to above it.
The next computational example was devoted to the determination of the apparent (homogenized) elasticity tensor
components in the linear elastic regime and was prepared using the Statistical Volume Element (SVE) [83] of the
particulate composite discretized in the FEM system ABAQUS (Fig. 23); the RVE is cubic and contains a centrally
located spherical particle. This model consists of 96 999 nodal points and 22 048 C3D20RH hexahedral finite
elements. The homogenization of this composite is based upon the equality of deformation energies of the heterogeneous
medium and its homogeneous counterpart under uniform uniaxial, biaxial as well as shear stretches imposed on the external
surfaces of this RVE. The extension ∆ is a given displacement, here unitary, while the biaxial and shear
deformation states are imposed on this RVE in a very similar way.
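Assuming, for illustration, a cubic SVE of edge length L (so V = L³) whose homogenized counterpart has cubic symmetry (C2222 = C1111), and denoting by U_uni, U_bi and U_shear the strain energies obtained from the FEM runs for the uniaxial stretch Δ, the equi-biaxial stretch Δ and the engineering shear angle γ, the energy equivalence yields the closed-form recovery sketched below. The function and argument names are hypothetical, and the relations are a generic statement of the energy-equivalence idea rather than a transcription of the authors' implementation [68].

def apparent_stiffness(U_uni, U_bi, U_shear, delta, gamma, L):
    """Apparent (homogenized) stiffness components of a cubic SVE of edge L recovered by
    equating FEM strain energies with those of the homogeneous counterpart (V = L**3)."""
    V = L ** 3
    eps = delta / L                               # uniform normal strain of the stretched SVE
    C1111 = 2.0 * U_uni / (eps ** 2 * V)          # 0.5 * C1111 * eps**2 * V = U_uni
    C1122 = U_bi / (eps ** 2 * V) - C1111         # 0.5 * (2*C1111 + 2*C1122) * eps**2 * V = U_bi
    C1212 = 2.0 * U_shear / (gamma ** 2 * V)      # 0.5 * C1212 * gamma**2 * V = U_shear
    return C1111, C1122, C1212

Repeating such a recovery for the eleven deterministically modified particle radii yields the discrete data to which the composite response functions C^(hom)_ijkl(r) are subsequently fitted.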
This problem was numerically solved on a single-processor machine equipped with an i5-4670K processor and
lasted 6 min 43.3 s. Young’s modulus of the rubber particle was adopted as e(p) = 1.0 MPa and the Poisson
ratio is equal to ν (p) = 0.489 (it is almost an incompressible material), while the same parameters for the polymeric
matrix are equal to e(m) = 4.0 GPa and ν (m) = 0.34, correspondingly. The basic input random variable scattered
statistically according to the truncated Gaussian probability distribution is this particle radius, whose expected value
equals E[r] = 0.25 µm (effectively giving a particle volume fraction of 6.5%), while its coefficient of variation was
considered in the interval α(r) ∈ [0.00, 0.25]; this is equivalent to r ∈ [0.0625, 0.4375] µm. The huge sensitivity
of the apparent elasticity tensor to this particle size was a motivation for this study. The second motivation
was the fact that the function C^(hom)_1111 = C^(hom)_1111(r) is an implicit analytical function of r and must be recovered
numerically. The homogenization method was implemented here by equating the deformation energies of the real
and the equivalent homogeneous RVE under the same boundary conditions of its uniform uniaxial and biaxial
extensions [68]. Eleven series of the homogenization problems were solved to create the response functions in-
between the apparent tensor components and the particle radius—all with deterministically modified particle radius
(plus additional automatic re-meshing with the same parameters), whose values are taken from the following set
r ∈ {0.2375, 0.24, 0.2425, 0.245, 0.2475, 0.25, 0.2525, 0.255, 0.2575, 0.26, 0.2625} µm. The composite structural
responses of the dependence y48 with the approximation base polynomials of degree POa = 2 were determined—
as optimal for all considered tensor components according to the RMSEmod criterion. The numerical example
results showed the satisfactory accuracy of the values of the probabilistic characteristics for 2Mfin = 16. The actual
relative error was much smaller than the assumed one (1%) and for the subsequent moments it reached at most
3.50E−9, 1.66E−7, 1.26E−6, and 1.61E−6, in turn.
Fig. 20. A quarter of the extended steel cylindrical bar with discretization and boundary conditions.
Fig. 21. Expected values [mm] (left) and CoVs (right) for the extreme vertical deformation of the bar.
Fig. 22. Skewness (left) and kurtosis (right) of the extreme vertical deformation of the extended bar.
Fig. 24. Expected values [GPa] (left) and CoVs (right) of the effective elasticity tensor components.
In the case of graphical presentation, this translated again into a visually
identical course (Figs. 24–25) of continuous TISPT dependence (lines) and TSAM discrete data (symbols). Their
coincidence is especially desirable and promising because it suggests the applicability of the perturbation methods
family also to homogenization problems. Again, the first technique made it possible to accurately observe local,
rapid changes in the values of the characteristics, especially for smaller values of the parameter α(r), which in the
case of the second method is significantly limited by their discretization. The overall computational effort (including
plotting) for TISPT was equivalent to 50.38 MB and 3.83 s for the entire stochastic perturbation-based solution
discussed in this example—with 20-digit working precision. The TSAM then cost 39.63 MB and 6.17 s. For the
same accuracy, using the TISPT approach again led to a reduction of the total computation time. However, the
computer memory usage was greater for the former method. This example was performed using a MAPLE 13 (32-bit)
installation under Windows 8 (64-bit) on a computer equipped with an i7-4710HQ 4-core processor.
Fig. 25. Skewness (left) and kurtosis (right) of the effective elasticity tensor components.
Focusing on the results of probabilistic characteristics for individual tensor components, the differences occur
mainly in the case of the expected values (Fig. 24a), where they are mainly caused by the different deterministic values.
Nevertheless, the graphic illustration of the higher characteristics is similar. This is due to the same mathematical
structure of responses, which results in a similar sensitivity of the state function to the input coefficient of variation.
The exceptions are the coefficients of variation (Fig. 24b), which is explained by the non-uniformity of the particle
radius influence on individual tensor components, and thus a more complex form of the response function—
compared to the previous experiments. This time, the results seem to be sensitive to the values of individual
coefficients of the base polynomials.
Analyzing the characteristics themselves, a nonlinear decrease in the expected values with α(r) is observed,
because the response function is concave in the considered interval. Let us note that the real fluctuations are at most
about 2%, so they are hardly observable in the joint graph. The values of the resulting coefficient of variation
increase more slowly than the input α(r). Simultaneously, the relationship is almost linear in the entire considered range
of variability, where the slope is close to 0.32 for C^(hom)_1212, 0.48 for C^(hom)_1111, and 0.64 for C^(hom)_1122 (Fig. 24b). Thus, their
proportion is 1:1.5:2, correspondingly. Nevertheless, the variability of the output parameter is smaller than the input
one. The skewness always tends to a value of approximately 1.0 as α(r)→0, and then increases, reaching a
local maximum (Fig. 25a). In the further parts of the graph, there is a decrease to negative values, which proves
the left-hand asymmetry of the state function distribution. The sign of the final value of skewness is justified by the
concavity of the response function, while the initial βs agrees with the symbolic estimate computed here. Kurtosis
exhibits a global increase in its value with the increasing input coefficient of variation (Fig. 25b). It starts from
around −2.0, and then, around 0, the graph levels out over a certain section before increasing further to positive values.
Thus, when α(r) is increased, the concentration of the state function values grows from below that of the normal
distribution, through a comparable level, up to above it.
5. Concluding remarks
An algorithm for determining probabilistic characteristics for the truncated Gaussian variables with symmetric
integration limits around the mean value has been developed in this work. This has been implemented using
the generalized iterative stochastic fourth-moment perturbation method. The approach proposed enables one to derive
analytical formulas for the expected values, coefficient of variation, higher-order moments as well as skewness
and kurtosis involving composite response functions based on polynomials and their derivatives. Additionally, the
limit values of some resulting characteristics have been symbolically estimated for the TISPT scheme. Numerical
examples attached demonstrate very high accuracy when compared with the (semi-)analytical probabilistic technique
in some linear and non-linear solid mechanics problems solved with the composite response functions based on
polynomials.
The most important research finding of this work is that the proposed perturbation-based approaches and their
orders have a meaningful influence on the numerical convergence and accuracy of up to the fourth-order probabilistic
characteristics, where all truncations have been introduced according to the three-sigma rule. The Truncated Iterative
Stochastic Perturbation Technique turned out to be the most stable approach giving a global gain in the accuracy of
the results with perturbation order increase. It has been directly confirmed by the computations that the same Taylor
expansion of a given performance function (having a specific order) should be substituted into the formulas for the
subsequent probabilistic characteristics; most of them require a remarkably higher perturbation order to obtain the
required accuracy, and such a substitution leads to the fastest improvement of their accuracy.
A way for selecting the TISPT scheme’s order leading to the assumed accuracy of probabilistic characteristics
has been proposed, and it has been positively verified in the few practical engineering examples. The form of the
inner function is of paramount importance from the perspective of the required order of the TISPT scheme; the
lack of modification in argument x leads to the fastest numerical convergence; its slight modifications make usage
of higher-order terms necessary. Substituting for the inner function more and more rapidly decreasing functions
in the neighborhood of the mean value x0 demands higher required orders in the TISPT scheme. The assumption
of a smaller threshold of the relative error results in a significant increase of the required order, but without any loss
of numerical convergence. On the other hand, the relative errors decrease remarkably when assuming smaller values of
the input coefficient of variation, to which compliance with the truncated analytical method solution was to be
maintained. All of this occurs under the condition that the response function is selected to ensure convergence in
the given interval of the input coefficient of variation—[0.0, αmax ].
The Truncated (Iterative) Stochastic Finite Element Method presented in this paper is an efficient tool that
can be an alternative to the (semi-)analytical probabilistic approach. The first method always leads to analytical
equations for the basic probabilistic characteristics of the structural response, but the second approach may be
inapplicable in the case of some non-Gaussian PDFs, where probability integrals cannot be derived analytically. Another advantage is that
it allows obtaining any assumed accuracy together with a continuous dependence of probabilistic characteristics—or
reliability indicators—for different performance functions as a function of input coefficient of variation instead of
a limited number of discrete data. It is particularly desirable in the context of the structure reliability comparison.
Additionally, it makes it possible to accurately observe local, rapid changes in the values of the characteristics,
which in the case of the discrete stochastic methods, e.g. Monte-Carlo simulations, is significantly limited. Another
advantage of the TISPT approach was the reduction of the total computation time in the examples.
This study may be further extended towards an analysis of some random variables exhibiting Gaussian probability
density functions with non-symmetric (lower or upper, finite or no) truncations around the mean value—Young’s
modulus or geometric parameters usually exhibit Gaussian character but limited to the non-negative values only—as
well as non-Gaussian probability distributions e.g. log-normal, Gumbel, Weibull. There, both even and odd order
terms must be included in all probabilistic characteristics formulas. It should be noticed that the Gaussian distribution
assumption for the random input variable for most material parameters in engineering design is not justified by the
experimental statistics. The approach proposed may be successfully extended toward probabilistic entropies and
probabilistic relative entropies in computational solid mechanics [84]. A very important direction of further work
is also the theoretical development and then computer implementation of problems for functions of many random
variables, when their cross-correlations are of paramount importance for their joint consideration.
Declaration of competing interest
The authors declare the following financial interests/personal relationships which may be considered as potential
competing interests: Kaminski Marcin reports financial support was provided by National Science Centre Poland.
Kaminski Marcin reports a relationship with National Science Centre Poland that includes funding grants.
Data availability
Acknowledgments
This paper has been written in the framework of the research grant OPUS no. 2021/41/B/ST8/02432 “Probabilistic
entropy in engineering computations” sponsored by the National Science Center in Cracow, Poland.
References
[1] J. Burkardt, The Truncated Normal Distribution, Vol. 1, Dept. of Sci. Comput. Website, Florida State University, 2014, p. 35,
http://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf.
[2] S. Zein, D. Dumas, A truncated Gaussian random field method for modelling the porosity defect in composite structures, Compos.:
Mech. Comput. Appl.: Int. J. 14 (1) (2023) 41–56.
[3] M. Zheng, Y. Ohta, Bayesian positive system identification: truncated Gaussian prior and hyperparameter estimation, Syst. Control
Lett. 148 (2021) 104857.
[4] B. Huang, Z. He, H. Zhang, Hybrid perturbation-Galerkin method for geometrical nonlinear analysis of truss structures with random
parameters, Chin. J. Theor. Appl. Mech. 51 (5) (2019) 1424–1436.
[5] F. Wu, Q. Gao, X. Xu, W.X. Zhong, A modified computational scheme for the stochastic perturbation finite element method, Latin
Am. J. Solids Struct. 12 (2015) 2480–2505.
[6] F. Wu, L.Y. Yao, M. Hu, Z.C. He, A stochastic perturbation edge-based smoothed finite element method for the analysis of uncertain
structural-acoustics problems with random variables, Eng. Anal. Bound. Elem. 80 (2017) 116–126.
[7] B. Xia, D. Yu, J. Liu, Transformed perturbation stochastic finite element method for static response analysis of stochastic structures,
Finite Elem. Anal. Des. 79 (2014) 9–21.
[8] O. Cavdar, A. Bayraktar, A. Cavdar, S. Adamur, Perturbation based stochastic finite element analysis of the structural systems with
composite sections under earthquake forces, Steel Compos. Struct. 8 (2) (2008) 129–144.
[9] X.B. Hu, X.Y. Cui, H. Feng, G.Y. Li, Stochastic analysis using the generalized perturbation stable node-based smoothed finite element
method, Eng. Anal. Bound. Elem. 70 (2016) 40–55.
[10] F. Yamazaki, M. Shinozuka, S. Dasgupta, Neumann expansion for stochastic finite element analysis, J. Eng. Mech. 114 (8) (1988)
1335–1354.
[11] R.G. Ghanem, P.D. Spanos, Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, Berlin, 1991, http://dx.doi.org/10.1007/
978-1-4612-3094-6.
[12] J. Wang, Z. Lu, L. Wang, An efficient method for estimating failure probability bounds under random-interval mixed uncertainties by
combining line sampling with adaptive kriging, Internat. J. Numer. Methods Engrg. 124 (2) (2023) 308–333.
[13] H. Zhang, B. Huang, A new homotopy-based approach for structural stochastic analysis, Probab. Eng. Mech. 55 (2019) 42–53.
[14] H. Zhang, B. Huang, Y. Liu, X. Xiang, Z. Wu, A new stochastic residual error based homotopy approach for stability analysis of
structures with large fluctuation of random parameters, Internat. J. Numer. Methods Engrg. 124 (1) (2023) 183–216.
[15] J.T. Oden, S. Prudhomme, Estimation of modeling error in computational mechanics, J. Comput. Phys. 182 (2002) 496–515.
[16] L. Mathelin, O. le Maitre, Dual-based a posteriori error estimate for stochastic finite element methods, Commun. Appl. Math. Comput.
Sci. 2 (1) (2007) 83–115.
[17] D. Guignard, F. Nobile, A posteriori error estimation for the Stochastic Collocation Finite Element Method, SIAM J. Numer. Anal.
56 (5) (2018) 3121–3143.
[18] X. Li, X. Yang, Error estimates of Finite Element Methods for stochastic fractional differential equations, J. Comput. Math. 35 (3)
(2017) 346–362.
[19] S. Clenet, N. Ida, Error estimation in a stochastic finite element method in electrokinetics, Internat. J. Numer. Methods Engrg. 81 (11)
(2010) 1417–1438.
[20] C.A. Cornell, A probability-based structural code, Am. Concr. Inst. J. 66 (1969) 974–985, http://dx.doi.org/10.14359/7446.
[21] M. Tichý, First-order third-moment reliability method, Struct. Saf. 16 (1994) 189–200, http://dx.doi.org/10.1016/0167-4730(94)00021-H.
[22] T. Ono, H. Idota, Development of High Order Moment Standardization Method into structural design and its efficiency (in Japanese),
J. Struct. Constr. Eng. (Trans. AIJ) 365 (1986) 40–47, http://dx.doi.org/10.3130/aijsx.365.0_40.
[23] Y.-G. Zhao, Z.-H. Lu, Fourth-moment standardization for structural reliability assessment, J. Struct. Eng. 133 (2007) 916–924,
http://dx.doi.org/10.1061/(asce)0733-9445(2007)133:7(916).
[24] L.W. Zhang, An improved fourth-order moment reliability method for strongly skewed distributions, Struct. Multidiscip. Optim. 62
(2020) 1213–1225, http://dx.doi.org/10.1007/s00158-020-02546-y.
[25] Z.-H. Lu, D.-Z. Hu, Y.-G. Zhao, Second-order fourth-moment method for structural reliability, J. Eng. Mech. 143 (2017) 06016010,
http://dx.doi.org/10.1061/(asce)em.1943-7889.0001199.
[26] M. Kamiński, M. Solecka, Optimization of the truss-type structures using the generalized perturbation-based Stochastic Finite Element
Method, Finite Elem. Anal. Des. 63 (2013) 69–79, http://dx.doi.org/10.1016/j.finel.2012.08.002.
[27] J.E. Hurtado, A.H. Barbat, Monte Carlo techniques in computational stochastic mechanics, Arch. Comput. Methods Eng. 5 (1998)
3–30, http://dx.doi.org/10.1007/bf02736747.
[28] D. Settineri, G. Falsone, An APDM-based method for the analysis of systems with uncertainties, Comput. Methods Appl. Mech. Engrg.
278 (2014) 828–852, http://dx.doi.org/10.1016/j.cma.2014.06.014.
[29] G. Falsone, R. Laudani, Matching the principal deformation mode method with the probability transformation method for the analysis
of uncertain systems, Internat. J. Numer. Methods Engrg. 118 (2019) 395–410, http://dx.doi.org/10.1002/nme.6018.
[30] R. Laudani, G. Falsone, Use of the probability transformation method in some random mechanic problems, ASCE-ASME J. Risk
Uncertain. Eng. Syst. A 7 (2021) http://dx.doi.org/10.1061/ajrua6.0001111.
[31] R. Laudani, G. Falsone, An evolutive probability transformation method for the dynamic stochastic analysis of structures, Probab. Eng.
Mech. 69 (2022) 103313, http://dx.doi.org/10.1016/J.PROBENGMECH.2022.103313.
[32] M.M. Kamiński, P. Świta, Generalized stochastic finite element method in elastic stability problems, Comput. Struct. 89 (2011)
1241–1252, http://dx.doi.org/10.1016/j.compstruc.2010.08.009.
[33] M.M. Kamiński, On semi-analytical probabilistic finite element method for homogenization of the periodic fiber-reinforced composites,
Internat. J. Numer. Methods Engrg. 86 (2011) 1144–1162, http://dx.doi.org/10.1002/nme.3097.
[34] M.R. Rajashekhar, B.R. Ellingwood, A new look at the response surface approach for reliability analysis, Struct. Saf. 12 (1993)
205–220, http://dx.doi.org/10.1016/0167-4730(93)90003-J.
[35] X.L. Guan, R.E. Melchers, Effect of response surface parameter variation on structural reliability estimates, Struct. Saf. 23 (2002)
429–444, http://dx.doi.org/10.1016/S0167-4730(02)00013-9.
[36] C. Bucher, Metamodels of optimal quality for stochastic structural optimization, Probab. Eng. Mech. 54 (2018) 131–137,
http://dx.doi.org/10.1016/j.probengmech.2017.09.003.
[37] C.G. Bucher, U. Bourgund, A fast and efficient response surface approach for structural reliability problems, Struct. Saf. 7 (1990)
57–66, http://dx.doi.org/10.1016/0167-4730(90)90012-E.
[38] I. Babuška, R. Tempone, G.E. Zouraris, Solving elliptic boundary value problems with uncertain coefficients by the finite element
method: The stochastic formulation, Comput. Methods Appl. Mech. Engrg. 194 (2005) 1251–1294, http://dx.doi.org/10.1016/j.cma.
2004.02.026.
[39] A. Keese, H.G. Matthies, Hierarchical parallelisation for the solution of stochastic finite element equations, Comput. Struct. 83 (2005)
1033–1047, http://dx.doi.org/10.1016/j.compstruc.2004.11.014.
[40] M. Kleiber, T. Hien, The Stochastic Finite Element Method, Wiley, New York, 1992.
[41] W.K. Liu, T. Belytschko, A. Mani, Random field finite elements, Internat. J. Numer. Methods Engrg. 23 (1986) 1831–1845,
http://dx.doi.org/10.1002/nme.1620231004.
[42] Z. Zheng, H. Dai, Structural stochastic responses determination via a sample-based stochastic finite element method, Comput. Methods
Appl. Mech. Engrg. 381 (2021) http://dx.doi.org/10.1016/j.cma.2021.113824.
[43] M.M. Kamiński, The Stochastic Perturbation Method for Computational Mechanics, Wiley, Chichester, 2013, http://dx.doi.org/10.1002/
9781118481844.
[44] C. Wang, Z. Qiu, D. Wu, Numerical analysis of uncertain temperature field by stochastic finite difference method, Sci. China Phys.
Mech. Astron. 57 (2014) 698–707, http://dx.doi.org/10.1007/s11433-013-5235-x.
[45] M. Kamiński, R.L. Ossowski, Navier–Stokes problems with random coefficients by the weighted least squares technique stochastic
finite volume method, Arch. Civ. Mech. Eng. 14 (2014) 745–756, http://dx.doi.org/10.1016/j.acme.2013.12.004.
[46] R. Honda, Stochastic BEM with spectral approach in elastostatic and elastodynamic problems with geometrical uncertainty, Eng. Anal.
Bound. Elem. 29 (2005) 415–427, http://dx.doi.org/10.1016/j.enganabound.2005.01.007.
[47] C. Ding, X. Hu, X. Cui, G. Li, Y. Cai, K.K. Tamma, Isogeometric generalized nth order perturbation-based stochastic method for
exact geometric modeling of (composite) structures: Static and dynamic analysis with random material parameters, Comput. Methods
Appl. Mech. Engrg. 346 (2019) 1002–1024, http://dx.doi.org/10.1016/j.cma.2018.09.032.
[48] T.J.R. Hughes, J.A. Cottrell, Y. Bazilevs, Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement,
Comput. Methods Appl. Mech. Engrg. 194 (2005) 4135–4195, http://dx.doi.org/10.1016/j.cma.2004.10.008.
[49] T.D. Hien, H.C. Noh, Stochastic isogeometric analysis of free vibration of functionally graded plates considering material randomness,
Comput. Methods Appl. Mech. Engrg. 318 (2017) 845–863, http://dx.doi.org/10.1016/j.cma.2017.02.007.
[50] K. Li, W. Gao, D. Wu, C. Song, T. Chen, Spectral stochastic isogeometric analysis of linear elasticity, Comput. Methods Appl. Mech.
Engrg. 332 (2018) 157–190, http://dx.doi.org/10.1016/j.cma.2017.12.012.
[51] B.M. Pokusiński, M.M. Kamiński, Lattice domes reliability by the perturbation-based approaches vs. semi-analytical method, Comput.
Struct. 221 (2019) 179–192, http://dx.doi.org/10.1016/j.compstruc.2019.05.012.
[52] M. Kamiński, Potential problems with random parameters by the generalized perturbation-based stochastic finite element method,
Comput. Struct. 88 (2010) 437–445, http://dx.doi.org/10.1016/j.compstruc.2009.12.005.
[53] M.M. Kamiński, On the dual iterative stochastic perturbation-based finite element method in solid mechanics with Gaussian uncertainties,
Internat. J. Numer. Methods Engrg. 104 (2015) 1038–1060, http://dx.doi.org/10.1002/nme.4976.
[54] J. Forsberg, L. Nilsson, On polynomial response surfaces and kriging for use in structural optimization of crashworthiness, Struct.
Multidiscip. Optim. 29 (2005) 232–243, http://dx.doi.org/10.1007/s00158-004-0487-8.
[55] B. Xia, H. Lü, D. Yu, C. Jiang, Reliability-based design optimization of structural systems under hybrid probabilistic and interval
model, Comput. Struct. 160 (2015) 126–134, http://dx.doi.org/10.1016/j.compstruc.2015.08.009.
[56] C. Bucher, T. Most, A comparison of approximate response functions in structural reliability analysis, Probab. Eng. Mech. 23 (2008)
154–163, http://dx.doi.org/10.1016/j.probengmech.2007.12.022.
[57] B. Xia, D. Yu, J. Liu, Transformed perturbation stochastic finite element method for static response analysis of stochastic structures,
Finite Elem. Anal. Des. 79 (2014) 9–21, http://dx.doi.org/10.1016/j.finel.2013.10.003.
[58] L. Faravelli, Response-surface approach for reliability analysis, J. Eng. Mech. 115 (1989) 2763–2781, http://dx.doi.org/10.1061/(asce)
0733-9399(1989)115:12(2763).
[59] B. Xia, D. Yu, Change-of-variable interval stochastic perturbation method for hybrid uncertain structural-acoustic systems with random
and interval variables, J. Fluids Struct. 50 (2014) 461–478, http://dx.doi.org/10.1016/j.jfluidstructs.2014.07.005.
[60] B. Huang, Q.S. Li, A.Y. Tuan, H. Zhu, Recursive approach for random response analysis using non-orthogonal polynomial expansion,
Comput. Mech. 44 (2009) 309–320, http://dx.doi.org/10.1007/s00466-009-0375-6.
[61] S. Rahman, A polynomial dimensional decomposition for stochastic computing, Internat. J. Numer. Methods Engrg. 76 (2008)
2091–2116, http://dx.doi.org/10.1002/nme.2394.
[62] M. Kamiński, Uncertainty analysis in solid mechanics with uniform and triangular distributions using stochastic perturbation-based
Finite Element Method, Finite Elem. Anal. Des. 200 (2022) 103648, http://dx.doi.org/10.1016/j.finel.2021.103648.
[63] B.W. Char, K.O. Geddes, G.H. Gonnet, B.L. Leong, M.B. Monagan, S.M. Watt, First Leaves: A Tutorial Introduction to Maple V,
Springer-Verlag, Berlin-Heidelberg, 1992, http://dx.doi.org/10.1007/978-1-4615-6996-1.
[64] J.S. Bendat, A.G. Piersol, Random Data: Analysis and Measurement Procedures, Wiley, New York, 1971, http://dx.doi.org/10.1002/
9781118032428.
[65] W. Feller, An Introduction to Probability Theory and its Applications, Wiley, New York, 1965.
[66] E. Vanmarcke, Random Fields: Analysis and Synthesis, MIT Press, Cambridge, 1983.
[67] N.T. Kottegoda, R. Rosso, Applied Statistics for Civil and Environmental Engineers, Blackwell, Chichester, 2008.
[68] D. Sokołowski, M.M. Kamiński, Homogenization of carbon/polymer composites with anisotropic distribution of particles and stochastic
interface defects, Acta Mech. 229 (2018) 3727–3765, http://dx.doi.org/10.1007/s00707-018-2174-7.
[69] A.H. Nayfeh, Perturbation Methods, Wiley, New York, 1973, http://dx.doi.org/10.1002/9783527617609.
[70] E.J. Hinch, Perturbation Methods, Cambridge University Press, Cambridge, 1991, http://dx.doi.org/10.1017/cbo9781139172189.
[71] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, 1996, http://dx.doi.org/10.1137/1.9781611971484.
[72] J. Wolberg, Data Analysis using the Method of Least Squares: Extracting the Most Information from Experiments, Springer, Berlin,
2005, http://dx.doi.org/10.1007/3-540-31720-1.
[73] C. Runge, Über empirische funktionen und die interpolation zwischen äquidistanten ordinaten, Z. Math. Phys. 46 (1901) 224–243.
[74] H. Akaike, A new look at the statistical model identification, IEEE Trans. Automat. Control 19 (1974) 716–723, http://dx.doi.org/10.
1109/TAC.1974.1100705.
[75] G. Schwarz, Estimating the dimension of a model, Ann. Statist. 6 (1978) 461–464, http://dx.doi.org/10.1214/aos/1176344136.
[76] S.L. Sclove, Application of model-selection criteria to some problems in multivariate analysis, Psychometrika 52 (1987) 333–343,
http://dx.doi.org/10.1007/BF02294360.
[77] J.H. Steiger, J.C. Lind, Statistically based tests for the number of common factors, in: Proc. Annu. Meet. Psychom. Soc. Struct. Equ.
Model., Iowa City, 1980.
[78] European Committee for Standardization, EN 1990: Eurocode - Basis of Structural Design, Brussels, 2002.
[79] J.T. Oden, Finite Elements of Nonlinear Continua, McGraw-Hill, New York, 1972.
[80] D.R.J. Owen, E. Hinton, Finite Elements in Plasticity – Theory and Practice, Pineridge Press, Swansea, 1980.
[81] O.C. Zienkiewicz, R.L. Taylor, D.D. Fox, The Finite Element Method for Solid and Structural Mechanics, seventh ed., Elsevier,
Amsterdam, 2014, http://dx.doi.org/10.1016/C2009-0-26332-X.
[82] C. Schittich, G. Staib, D. Balkow, M. Schuler, W. Sobek, Glass Construction Manual, second ed., Munich, 2007.
[83] M. Ostoja-Starzewski, Material spatial randomness: from statistical to representative volume elements, Probab. Eng. Mech. 21 (2)
(2006) 112–132.
[84] M. Kamiński, On Bhattacharyya relative entropy in a homogenization of composite materials, Internat. J. Numer. Methods Engrg. 124
(2) (2023) 534–541.