Mathematical Foundations
Abstract This chapter presents basic mathematical concepts and tools for the development of the finite element method. The concept of a functional, associated with the variational formulation, and of a function, associated with the boundary value problem, is discussed; the method of the calculus of variations, which transforms the variational formulation into the boundary value problem, is also presented. A brief discussion of the numerical solution methods used to handle the variational formulation and the boundary value problem is given. Some numerical examples are included to show the convergence efficiency of the numerical methods.
2.1 Introduction
The general statement of the calculus of variations and the relationship between a functional and the function associated with its extremum are discussed in this section. In terms of physical problems, we try to obtain the equilibrium equation of a boundary value problem, which governs the function ψ(t, x_i), i = 1, 2, 3, through the minimization of its associated functional Π(ψ).
Assume that the function ψ(x_i), i = 1, 2, 3, is defined as a function of the space variables x_1, x_2, and x_3 and satisfies the equilibrium equation

L[ψ(x_1, x_2, x_3)] = 0   (2.3.1)

and the boundary conditions

B_i(ψ) = g_i,   i = 1, 2, . . .   (2.3.2)
Now, consider a variational function u which meets the continuity conditions and
satisfies the homogeneous boundary conditions
B_i(u) = 0,   i = 1, 2, . . .   (2.3.4)
Now, if ψ is the true solution of Eq. (2.3.1), ψ + εu may be made to represent an arbitrary admissible function which satisfies the real non-homogeneous boundary conditions. For fixed u, the variational parameter ε may be changed to produce a one-parameter family of admissible functions. Since u satisfies the homogeneous boundary conditions, it follows that

B_i[ψ + εu] = g_i,   i = 1, 2, . . .   (2.3.5)

Substituting the family of admissible functions ψ + εu in Eq. (2.3.3), the functional Π may be changed by varying the variational parameter ε. The extremum of the functional is obtained from the following rule:

∂Π[ψ + εu]/∂ε |_(ε=0) = 0.   (2.3.6)
This rule holds for every possible family ψ + εu generated by an arbitrary variational function u. The rule expressed in Eq. (2.3.6) is the basic and formal procedure of the method of the calculus of variations. The basic approach in the treatment of Eq. (2.3.6) is
integration by parts such that the arbitrary function u is factored out in the resulting
equations. Since u is an arbitrary function, the remaining part is set to zero. This
provides the boundary value problem and the associated natural boundary conditions.
To describe the method, a few general types of functionals are considered and, using the method of the calculus of variations, their associated boundary value problems are obtained.
Let us consider a functional that depends on y(x) and its first derivative y′(x), as given below:

Π[y(x)] = ∫_{x_1}^{x_2} F(x, y, y′) dx.   (2.4.1)

It is further assumed that the function F is continuous in the interval [x_1, x_2] and that its derivatives up to the first order exist and are continuous.
Among all functions y(x) which satisfy the continuity conditions and the given boundary conditions, which we call the class of admissible functions, there is only one special function y(x) which minimizes the functional Π.
In order to determine y(x), an arbitrary function u(x) is selected such that it is continuous, along with its first derivative, in the interval (x_1, x_2) and satisfies the homogeneous boundary conditions u(x_1) = 0 and u(x_2) = 0. A one-parameter family of admissible functions is then constructed as

ȳ(x) = y(x) + εu(x)   (2.4.4)

where ȳ(x) satisfies all the continuity conditions and the given boundary conditions. The parameter ε is a positive, arbitrary variational parameter and is selected sufficiently small so that the function ȳ(x) is as close as desired to the function y(x). Therefore, since y(x) renders the functional Π a relative minimum, for sufficiently small ε the following inequality holds:
Π(y + εu) ≥ Π(y).   (2.4.5)

This suggests that f(ε) = Π(y + εu) is at a relative minimum when ε = 0, and since f(ε) is differentiable, the necessary condition for f(ε) to be at a relative minimum is therefore

∂f(ε)/∂ε |_(ε=0) = 0.   (2.4.7)
Introducing Eq. (2.4.4) into Eq. (2.4.1) yields

Π(y + εu) = ∫_{x_1}^{x_2} F[x, (y + εu), (y′ + εu′)] dx.   (2.4.8)

Differentiating with respect to ε, integrating by parts, and setting ε = 0 yields

∂Π(y + εu)/∂ε |_(ε=0) = ∫_{x_1}^{x_2} [∂F(x, y, y′)/∂y − d/dx (∂F(x, y, y′)/∂y′)] u(x) dx + (∂F/∂y′) u |_{x_1}^{x_2} = 0.   (2.4.11)

Since u(x) is arbitrary, the integrand in brackets must vanish,

∂F/∂y − d/dx (∂F/∂y′) = 0   (2.4.12)

and the boundary term provides the natural boundary conditions

∂F/∂y′ = 0   at x = x_1
∂F/∂y′ = 0   at x = x_2.   (2.4.13)
Equation (2.4.12) is equivalent to the boundary value problem. Expanding the differential term yields

∂F/∂y − ∂²F/∂x∂y′ − (dy/dx) ∂²F/∂y∂y′ − (d²y/dx²) ∂²F/∂y′² = 0.   (2.4.14)

Equation (2.4.14) is the necessary condition for a function y(x) to minimize the functional Π(y) given by Eq. (2.4.1).
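As a simple illustration (an assumed example, not from the text), the SymPy sketch below applies Eq. (2.4.12) to the arc-length integrand F = √(1 + y′²); since this F does not depend on y, the Euler equation reduces to −d/dx(∂F/∂y′) = 0, which simplifies to y″ = 0, i.e., the straight line as the shortest path.

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
yp = sp.Derivative(y, x)

F = sp.sqrt(1 + yp**2)                     # integrand F(x, y, y') = sqrt(1 + y'^2)

# Eq. (2.4.12) with dF/dy = 0: the Euler equation reduces to -d/dx (dF/dy') = 0
euler = -sp.diff(sp.diff(F, yp), x)
print(sp.simplify(euler))                  # -y''/(1 + y'^2)**(3/2), hence y'' = 0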
2.5 Higher Order Derivatives

Consider a functional as a function of the variable x, the function y(x), and higher order derivatives of the function up to y^(n)(x), defined in the interval [x_1, x_2], as

Π[y(x)] = ∫_{x_1}^{x_2} F(x, y, y′, . . . , y^(n)) dx.   (2.5.1)

The function y(x) is n times differentiable with respect to x. The boundary conditions are given for the function and its derivatives up to the order n − 1, as

y(x_1) = y_1,   y^(k)(x_1) = y_1^k
y(x_2) = y_2,   y^(k)(x_2) = y_2^k,   k = 1, . . . , n − 1

where y_1, y_2, y_1^k, and y_2^k are known on the boundary. That is, the boundary conditions are specified for the function itself and its derivatives up to order (n − 1). Consider a variational function u(x), and construct the function ȳ(x) as

ȳ(x) = y(x) + εu(x).
The variational function u(x) is an arbitrary function with the following properties:
1. The function u(x) and its derivatives up to order n are continuous in the interval
(x1 , x2 ).
2. The function u(x) and all of its derivatives up to order (n − 1) satisfy the homogeneous boundary conditions.
u(x_1) = 0,   u^(k)(x_1) = 0
u(x_2) = 0,   u^(k)(x_2) = 0,   k = 1, 2, . . . , n − 1.
Differentiating Π(ȳ) with respect to ε and setting ε = 0 yields

∂Π/∂ε |_(ε=0) = ∫_{x_1}^{x_2} (∂F/∂y u + ∂F/∂y′ u′ + ∂F/∂y″ u″ + . . . + ∂F/∂y^(n) u^(n)) dx.   (2.5.6)
The higher order derivative terms are similarly reduced, through repeated integration by parts, to factors of u(x) and a series of terms which are evaluated at the boundaries x = x_1 and x = x_2. Since the arbitrary variational function u(x) and all its derivatives up to order n are continuous in the interval (x_1, x_2) and vanish at x_1 and x_2, Eq. (2.5.6) therefore reduces to
∂Π(y + εu)/∂ε |_(ε=0) = ∫_{x_1}^{x_2} [∂F/∂y − d/dx (∂F/∂y′) + d²/dx² (∂F/∂y″) − · · · + (−1)^n d^n/dx^n (∂F/∂y^(n))] u(x) dx
+ (∂F/∂y′) u |_{x_1}^{x_2} + (∂F/∂y″) u′ |_{x_1}^{x_2} − [d/dx (∂F/∂y″)] u |_{x_1}^{x_2} + . . . .   (2.5.9)
This equation is valid for all possible arbitrary functions u(x) with the given
properties. Therefore, if the integral equation should be zero for all possible functions
u(x), the following expression must be identically equal to zero:

∂F/∂y − d/dx (∂F/∂y′) + d²/dx² (∂F/∂y″) − · · · + (−1)^n d^n/dx^n (∂F/∂y^(n)) = 0.   (2.5.10)
∂F/∂y′ − d/dx (∂F/∂y″) + d²/dx² (∂F/∂y‴) − · · · + (−1)^(n−1) d^(n−1)/dx^(n−1) (∂F/∂y^(n)) = 0
∂F/∂y″ − d/dx (∂F/∂y‴) + · · · + (−1)^(n−2) d^(n−2)/dx^(n−2) (∂F/∂y^(n)) = 0
. . .
∂F/∂y^(n) = 0   at x = x_1 and x = x_2.   (2.5.11)
Since the function F is known, Eq. (2.5.10) results in the boundary value problem governing the function y(x). This function minimizes the functional Π given by Eq. (2.5.1). Equations (2.5.11) are the natural boundary conditions derived through the integration by parts of the functional.
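As an illustration (an assumed example, not from the text), for an integrand that depends only on the second derivative, F = (1/2) EI (y″)², Eq. (2.5.10) reduces to d²/dx² (∂F/∂y″) = 0 and gives the fourth order equation EI y^IV = 0, which reappears for the cantilever beam in Sect. 2.7. The SymPy sketch below carries out this reduction.

import sympy as sp

x, EI = sp.symbols('x EI', positive=True)
y = sp.Function('y')(x)
ypp = sp.Derivative(y, x, 2)

F = sp.Rational(1, 2) * EI * ypp**2          # F depends only on y''

# For this F the terms in dF/dy and dF/dy' vanish, so Eq. (2.5.10) reduces to
# d^2/dx^2 (dF/dy'') = 0.
euler = sp.diff(sp.diff(F, ypp), x, 2)
print(sp.Eq(euler, 0))                       # Eq(EI*Derivative(y(x), (x, 4)), 0)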
Consider next a functional defined over a two-dimensional domain D, bounded by a curve C, as a function of x, y, the function u(x, y), and its first partial derivatives,

Π[u(x, y)] = ∬_D F(x, y, u, u_x, u_y) dx dy   (2.6.1)

where u_x and u_y are the partial derivatives of the function u with respect to x and y. We assume the function F to be at least differentiable up to the second order and the extremizing function u(x, y) differentiable up to the first order. We further assume that the class of admissible functions u(x, y) has the following properties:
a- u(x, y) is prescribed on the boundary curve C.
b- u(x, y) and its partial derivatives with respect to x and y up to order one are continuous in the domain D.
We assume that the function u(x, y) is the only function among the class of admissible functions which minimizes the functional Π. Now, the variational function g(x, y) with the following properties is considered:
a- g(x, y) = 0 for all the boundary points on C.
b- g(x, y) is continuous and differentiable in D.
We construct the one-parameter family of admissible functions as

ū(x, y) = u(x, y) + εg(x, y).
Substituting ū in Eq. (2.6.1) and carrying out the partial derivative with respect to ε gives

∂Π/∂ε = ∬_D ∂/∂ε F[x, y, (u + εg), (u + εg)_x, (u + εg)_y] dx dy.   (2.6.3)

Setting ε = 0 yields

∂Π/∂ε |_(ε=0) = ∬_D (∂F/∂u g + ∂F/∂u_x g_x + ∂F/∂u_y g_y) dx dy.   (2.6.4)

Note that

∂/∂x (∂F/∂u_x g) = ∂/∂x (∂F/∂u_x) g + ∂F/∂u_x g_x
∂/∂y (∂F/∂u_y g) = ∂/∂y (∂F/∂u_y) g + ∂F/∂u_y g_y.   (2.6.5)
Substituting Eq. (2.6.5) in Eq. (2.6.4), the last two terms become

∬_D (∂F/∂u_x g_x + ∂F/∂u_y g_y) dx dy = ∬_D [∂/∂x (∂F/∂u_x g) + ∂/∂y (∂F/∂u_y g)] dx dy
− ∬_D [∂/∂x (∂F/∂u_x) + ∂/∂y (∂F/∂u_y)] g dx dy.   (2.6.6)
Here, ∂/∂x (∂F/∂u_x) is called the total partial derivative with respect to x; in performing the partial derivative with respect to x, the variable y remains constant, that is,

∂/∂x (∂F/∂u_x) = ∂²F/∂x∂u_x + u_x ∂²F/∂u_x∂u + (∂u_x/∂x) ∂²F/∂u_x² + (∂u_y/∂x) ∂²F/∂u_x∂u_y   (2.6.7)

and

∂/∂y (∂F/∂u_y) = ∂²F/∂y∂u_y + u_y ∂²F/∂u_y∂u + (∂u_x/∂y) ∂²F/∂u_y∂u_x + (∂u_y/∂y) ∂²F/∂u_y².   (2.6.8)
Now, using Green's integral theorem, the area integral is transformed into a line integral as

∬_D (∂M/∂y + ∂N/∂x) dx dy = ∮_C (N dy − M dx).   (2.6.9)

The right-hand side of this equation is integrated over the boundary curve C. Applying Eq. (2.6.9) to the first integral on the right-hand side of Eq. (2.6.6), with N = (∂F/∂u_x) g and M = (∂F/∂u_y) g, gives

∬_D [∂/∂x (∂F/∂u_x g) + ∂/∂y (∂F/∂u_y g)] dx dy = ∮_C (∂F/∂u_x dy − ∂F/∂u_y dx) g.   (2.6.10)

From Eqs. (2.6.10) and (2.6.6), we get

∬_D (∂F/∂u_x g_x + ∂F/∂u_y g_y) dx dy = − ∬_D [∂/∂x (∂F/∂u_x) + ∂/∂y (∂F/∂u_y)] g dx dy
+ ∮_C (∂F/∂u_x dy − ∂F/∂u_y dx) g.   (2.6.11)
Substituting Eq. (2.6.11) into Eq. (2.6.4), and letting the expression Eq. (2.6.4) be zero, we obtain

∬_D [∂F/∂u − ∂/∂x (∂F/∂u_x) − ∂/∂y (∂F/∂u_y)] g dx dy + ∮_C (∂F/∂u_x dy − ∂F/∂u_y dx) g = 0.   (2.6.12)

Since the variational function g(x, y) is arbitrary in D, the domain integral can vanish for all admissible g only if

∂F/∂u − ∂/∂x (∂F/∂u_x) − ∂/∂y (∂F/∂u_y) = 0   in D.   (2.6.13)

This equation is called the Euler equation, which is the boundary value problem associated with the functional Eq. (2.6.1). The natural boundary condition is

∮_C (∂F/∂u_x dy − ∂F/∂u_y dx) = 0   on C.   (2.6.14)
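As a simple illustration (an assumed example, not from the text), taking F = (1/2)(u_x² + u_y²) in Eq. (2.6.13) gives the Laplace equation u_xx + u_yy = 0; the SymPy sketch below performs this reduction.

import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)
ux, uy = sp.Derivative(u, x), sp.Derivative(u, y)

F = (ux**2 + uy**2) / 2                     # assumed integrand F(x, y, u, u_x, u_y)

# Eq. (2.6.13) with dF/du = 0: -d/dx(dF/du_x) - d/dy(dF/du_y) = 0
euler = -sp.diff(sp.diff(F, ux), x) - sp.diff(sp.diff(F, uy), y)
print(sp.Eq(euler, 0))                      # -u_xx - u_yy = 0, the Laplace equation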
2.7 Cantilever Beam

Consider a cantilever beam of arbitrary cross-sectional area, length L, and bending stiffness EI, subjected to a bending force F, as shown in Fig. 2.1.
We will now write the potential energy function of the beam under the applied force F and, by minimizing this function using the method of the calculus of variations, obtain the Euler equation for the equilibrium of the beam.
The potential energy of the beam is the sum of two parts, the internal strain energy and the potential energy of the external forces,

V = U + Ω   (2.7.1)
where V is the total potential energy of the beam, U is the internal strain energy, and Ω is the potential energy of the external forces.
From strength of materials, it is recalled that the internal strain energy of a beam subjected to a bending force is obtained from the following relation

U = ∫_0^L M²/(2EI) dx   (2.7.2)

where M is the bending moment distribution along the beam and dx is an element of the length of the beam, the associated strain energy of which is dU. From elementary beam theory, the bending moment M and the radius of curvature R are related by

M = EI/R.   (2.7.3)

Substituting M from Eq. (2.7.3) into Eq. (2.7.2) yields

U = ∫_0^L EI/(2R²) dx.   (2.7.4)

The radius of curvature R and the beam's elastic deflection y(x) are related as

R = (1 + y′²)^(3/2) / |y″|.   (2.7.5)
This equation is valid for an arbitrarily small deflection, provided that plastic defor-
mation does not occur.
For small deflections, the square of the slope, y′², is negligible compared with unity, so that 1/R ≈ y″ and the strain energy of Eq. (2.7.4) becomes

U = (1/2) ∫_0^L EI (y″)² dx.   (2.7.6)

The potential energy of the external force F is

Ω = −F y_1   (2.7.7)

where y_1 is the deflection under the force F in the y-direction and the negative sign indicates that work is done on the system. The total potential energy from Eq. (2.7.1), after substituting from Eqs. (2.7.7) and (2.7.6), becomes

V = −F y_1 + (1/2) ∫_0^L EI (y″)² dx.   (2.7.8)
Constructing the one-parameter family of admissible deflections ȳ(x) = y(x) + εη(x), where η(x) is a variational function and η_1 denotes its value at the point of application of F, and substituting into Eq. (2.7.8), gives

V(ȳ) = −F(y_1 + εη_1) + (1/2) EI ∫_0^L (y″ + εη″)² dx

or

V(ȳ) = −F(y_1 + εη_1) + (1/2) EI [∫_0^L (y″)² dx + ε² ∫_0^L (η″)² dx + 2ε ∫_0^L y″ η″ dx].
Setting ∂V(ȳ)/∂ε |_(ε=0) = 0 yields

∂V/∂ε |_(ε=0) = −F η_1 + EI ∫_0^L y″ η″ dx = 0

where ε in the above equation is set equal to zero and, according to the rule of the calculus of variations, the remaining expression is set equal to zero. Two integrations by parts give

−F η(L) + EI y″ η′ |_0^L − EI y‴ η |_0^L + EI ∫_0^L y^IV η dx = 0.   (2.7.10)
In order that Eq. (2.7.10) vanishes for all the admissible functions η(x), the following
conditions must hold
y^IV = 0   (2.7.11)
and
F + EI y‴(L) = 0        y″(0) = 0
y″(L) = 0                EI y‴(0) = 0.   (2.7.12)
The conditions Eq. (2.7.12) are known as the natural boundary conditions, since they are necessary to make the potential energy a minimum. Equation (2.7.11) is known as the Euler equation and is the equilibrium equation of the beam which minimizes the total potential energy under the given boundary conditions.
A general solution of Eq. (2.7.11) is
y = C_0 + C_1 x + C_2 x² + C_3 x³   (2.7.13)
where C0 , C1 , C2 and C3 are the constants of integration. For the given essential
boundary conditions, we have
C0 = C1 = 0. (2.7.14)
The other two force boundary conditions related to the moment and shear force on
x = L give
C_2 = FL/(2EI),   C_3 = −F/(6EI)   (2.7.15)
and therefore, the deflection equation of the beam becomes
y = (FL/(2EI)) x² − (F/(6EI)) x³   (2.7.16)
or
y = (F x²/(2EI)) (L − x/3).   (2.7.17)
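As a quick numerical check of Eq. (2.7.17), the sketch below evaluates the deflection for assumed (illustrative) values of F, E, I, and L; at x = L it reproduces the familiar tip deflection FL³/(3EI).

# Quick numerical check of Eq. (2.7.17); F, E, I and L are assumed values.
F, E, I, L = 1000.0, 200e9, 8.0e-6, 2.0        # N, Pa, m^4, m

def deflection(x):
    # Eq. (2.7.17): y = F x^2 / (2 E I) * (L - x/3)
    return F * x**2 / (2 * E * I) * (L - x / 3)

print(deflection(L))             # 1.6667e-03 m at the tip
print(F * L**3 / (3 * E * I))    # same value, the classical result F L^3 / (3 E I)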
2.8 Approximate Techniques

Consider an equilibrium problem governed by a differential equation of order 2m,

L_2m[ψ] = f   (2.8.1)

in a domain D, with the boundary conditions

B_i[ψ] = g_i,   i = 1, 2, . . . , 2m   (2.8.2)

and let the functional associated with this boundary value problem be

Π = Π(ψ).   (2.8.3)
An approximate solution of Eq. (2.8.1) may have the following linear form

ψ* = φ_0 + Σ_{j=1}^{n} C_j φ_j   (2.8.4)
where the functions φ j are linearly independent known functions of the variables in
the solution domain D satisfying the homogeneous boundary conditions. Function
φ0 is a known function of the variables satisfying the nonhomogeneous boundary
conditions, and the constants C j are the undetermined parameters. With the above
definitions, the functions φ j in Eq. (2.8.4) satisfy the boundary conditions
B_i[φ_0] = g_i,   i = 1, . . . , 2m
B_i[φ_j] = 0,   i = 1, . . . , 2m,   j = 1, . . . , n.   (2.8.5)

Thus, the function ψ* of Eq. (2.8.4) satisfies all the boundary conditions for arbitrary values of the constant coefficients C_j.
When the trial solution Eq. (2.8.4), which satisfies Eq. (2.8.5), is inserted into Eq. (2.8.1), the residual R is

R = f − L_2m[ψ*] = f − L_2m[φ_0 + Σ_{j=1}^{n} C_j φ_j].   (2.8.6)
For the exact solution, the residual R has to be identically zero. For a proper
approximate solution, it should be restricted within a small tolerance. The classical
weighted residual methods are as follows:
2.8.1.1 Collocation
The solution domain D is considered and n arbitrary points are selected inside
the domain, usually with a known geometric pattern. The residual R of equation
Eq. (2.8.6) is set equal to zero at n points in the domain D. That is
R = f − L_2m[φ_0 + Σ_{j=1}^{n} C_j φ_j] = 0.   (2.8.7)
This provides n simultaneous algebraic equations for the constants C j . The locations
of the points are arbitrary but, as mentioned, are usually such that D is covered more
or less uniformly by a simple pattern.
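As an illustration of the collocation idea, the sketch below applies a one-term trial function to the assumed model problem y″ + y + x = 0 with y(0) = y(1) = 0 (the same equation is solved by the Ritz/Galerkin method in Sect. 2.10); the trial function, the collocation point x = 1/2, and the use of SymPy are choices made here only for illustration.

import sympy as sp

x, C1 = sp.symbols('x C1')
phi1 = x * (1 - x)                      # satisfies the homogeneous BCs at x = 0 and x = 1
psi = C1 * phi1                         # one-term trial solution (phi0 = 0 here)

R = sp.diff(psi, x, 2) + psi + x        # residual of y'' + y + x = 0

# Collocation: one unknown, so force R = 0 at a single interior point, x = 1/2
sol = sp.solve(sp.Eq(R.subs(x, sp.Rational(1, 2)), 0), C1)
print(sol)                              # [2/7], i.e. psi* = (2/7) x (1 - x)

For comparison, the Galerkin treatment of the same equation in Sect. 2.10 gives C_1 = 5/18 ≈ 0.278, close to the collocation value 2/7 ≈ 0.286.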
2.8.1.2 Subdomain

The solution domain D is divided into n subdomains D_k, and the integral of the residual over each subdomain is set equal to zero,

∫_(D_k) R dD = 0,   k = 1, . . . , n.   (2.8.8)

This again provides n simultaneous algebraic equations for the constants C_j.
2.8.1.3 Galerkin
The complete solution domain is considered, and the residual R is made orthogonal to the approximating functions φ_j over the whole domain as

∫_D φ_k R dD = 0,   k = 1, . . . , n.   (2.8.9)
This equation provides a system of n algebraic equations for the n constant coefficients C_j.
The main difference between the collocation and subdomain methods and the
Galerkin method is that, in the collocation and subdomain methods, the solution
domain is divided into a number of elements and nodal points, while in the Galerkin
method the solution domain is considered as a whole. This is called the traditional
Galerkin method.
2.8.1.4 Least Squares

Similar to the Galerkin method, the complete solution domain is considered, and the integral of the square of the residual is minimized with respect to the constant coefficients C_j as

∂/∂C_k ∫_D R² dD = 0,   k = 1, . . . , n.   (2.8.10)
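For comparison with the collocation sketch above, the following sketch applies the least squares criterion of Eq. (2.8.10) to the same assumed model problem y″ + y + x = 0, y(0) = y(1) = 0, with the same one-term trial function; again, SymPy is used only for illustration.

import sympy as sp

x, C1 = sp.symbols('x C1')
psi = C1 * x * (1 - x)                  # same one-term trial as in the collocation sketch
R = sp.diff(psi, x, 2) + psi + x        # residual of y'' + y + x = 0

# Least squares: minimize the integral of R**2 over the domain with respect to C1
I2 = sp.integrate(R**2, (x, 0, 1))
sol = sp.solve(sp.Eq(sp.diff(I2, C1), 0), C1)
print(sol)                              # [55/202] ~ 0.272, near the Galerkin value 5/18 ~ 0.278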
Let Π be a functional such that the extremum problem for Π is equivalent to the equilibrium problem. The Ritz method consists of treating the extremum problem directly by inserting the trial family Eq. (2.8.4) into Π and setting

∂Π/∂C_j = 0,   j = 1, . . . , n.   (2.8.12)
These n equations are solved for the constants C_j which, when multiplied by their corresponding functions φ_j and summed, represent an approximate solution to the extremum problem. It is an approximate solution, because it gives a stationary value only for the class of functions defined by the trial family Eq. (2.8.4).
The most important step in the above discussion is the selection of the trial family Eq. (2.8.4). The purpose of the above criterion is merely to pick the best approximation from a given family.
2.9 Further Notes on the Ritz and Galerkin Methods

Consider the minimization of a functional Π(ψ), Eq. (2.9.1), defined over a region D and subjected to the boundary condition Eq. (2.9.2), where Γ is the contour bounding the region D and the subscripts in Eq. (2.9.1) indicate derivatives with respect to x or y. Let ψ be the exact solution of this problem and Π(ψ) = m the minimum value of the functional. If we can find a function ψ̄(x, y) which satisfies the boundary condition Eq. (2.9.2) and for which the value of the functional Π(ψ̄) is very close to m, then ψ̄ is a good approximation to the minimizer of the functional Eq. (2.9.1). On the other hand, if we can find a minimizing sequence ψ̄_n, i.e., a sequence of functions satisfying the condition Eq. (2.9.2) for which Π(ψ̄_n) approaches m, it would be expected that such a sequence converges to the solution.
Ritz proposed a classical method by which one can systematically find a function ψ̄ which minimizes the integral Eq. (2.9.1). To describe the Ritz method, let us express the trial function as a linear combination of suitably selected functions with the unknown coefficients a_1, a_2, . . . , a_n, as given by Eq. (2.9.3).
This function is chosen in such a way that, regardless of the values of an , ψ satisfies
the boundary condition Eq. (2.9.2). The Ritz method is then based on calculating
the coefficients a_1 through a_n for which ψ of Eq. (2.9.3) minimizes the integral Eq. (2.9.1). Upon substitution of Eq. (2.9.3) into Eq. (2.9.1), and performing the necessary differentiation and integration, we find that Π is converted into a function of the coefficients a_1, a_2, . . . , a_n; that is, Π = Π(a_1, a_2, . . . , a_n). To minimize this function, Ritz proved that the coefficients a_k must satisfy the following system of equations:

∂Π/∂a_k = 0,   k = 1, 2, . . . , n.   (2.9.4)
Let us assume that the solution of Eq. (2.9.4) for the n coefficients is ā_1, ā_2, . . . , ā_n. Substituting this solution into Eq. (2.9.3) for ψ, we obtain the nth approximation ψ̄_n.
Let ψ̄_n be the nth approximation, giving the least value of the integral in comparison with all the functions of the nth family. Since each successive family contains all the functions of the preceding one, i.e., for each successive problem the class of admissible functions is broader, it is clear that the successive minimums are non-increasing, Π(ψ̄_1) ≥ Π(ψ̄_2) ≥ · · · ≥ m.
Consider now a differential equation

L(y) = f(x)

in the interval (x_1, x_2), under the homogeneous boundary conditions

y(x_1) = 0
y(x_2) = 0   (2.9.9)
where L is a mathematical operator and f(x) is a known function. Note that nonhomogeneous boundary conditions of
y(x1 ) = y1
y(x2 ) = y2 (2.9.10)
can be transformed to the homogeneous conditions Eq. (2.9.9) with a proper change
of variables.
Let us choose a set of continuous, linearly independent functions w_i(x) in the interval (x_1, x_2) that satisfy the boundary conditions Eq. (2.9.9), that is, w_i(x_1) = w_i(x_2) = 0, and seek an approximate solution in the form

y_n = Σ_{i=1}^{n} a_i w_i(x).   (2.9.12)

The Galerkin method requires the residual L(y_n) − f(x) to be orthogonal to the functions w_k,

∫_{x_1}^{x_2} [L(y_n) − f(x)] w_k(x) dx = 0,   k = 1, 2, . . . , n.   (2.9.13)
When the number of functions w_i(x) tends to infinity (n → ∞), the solution tends to the exact solution. In order to find the coefficients a_i, the linear set of Eq. (2.9.13) has to be solved for the unknowns a_i (for a discussion and the solution of such systems of equations, one may refer to Kantorovich and Krylov [1]). In practical cases, a finite number of terms of the series Eq. (2.9.12) is considered, from which, upon substitution in Eq. (2.9.13), a finite set of linear equations is obtained to solve for the a_i.
The functions w_i(x) are usually selected in polynomial or trigonometric form, as

(x − x_1)(x − x_2),  (x − x_1)²(x − x_2),  . . . ,  (x − x_1)^n (x − x_2),  . . .

or

sin [nπ(x − x_1)/(x_2 − x_1)],   n = 1, 2, . . .   (2.9.14)

It is obvious that the origin of the coordinate system can be shifted to x_1 and, thus, in Eq. (2.9.14) x_1 = 0.
The Galerkin method is a powerful tool for obtaining an approximate solution of ordinary differential equations of any order n, systems of differential equations, and partial differential equations.
We will now apply the Ritz method to the solution of an ordinary differential equation
of the second order [1]
L(y) = d/dx (p y′) − q y − f = 0   (2.10.1)

under the homogeneous boundary conditions

y(0) = 0,   y(L) = 0.   (2.10.2)

The problem is equivalent to finding the minimum of the functional

Π(y) = ∫_0^L (p y′² + q y² + 2 f y) dx   (2.10.3)

subjected to the boundary conditions Eq. (2.10.2). We furthermore assume that p, q, and f are continuous in the given interval and that p(x) > 0. The trial functions φ_k, satisfying the homogeneous boundary conditions Eq. (2.10.2), may be selected as

φ_k = sin (kπx/L)   or   φ_k = (L − x) x^k,   k = 1, 2, . . . , n.   (2.10.5)
We now apply the Ritz method to obtain the minimum of the functional Eq. (2.10.3)
using the series of linear combinations of the functions φk . We seek a solution in the
form of
y_n = Σ_{k=1}^{n} a_k φ_k.   (2.10.6)

Substituting Eq. (2.10.6) into the functional Eq. (2.10.3) gives

Π(y_n) = ∫_0^L [p (y_n′)² + q y_n² + 2 f y_n] dx.   (2.10.7)

But

(Σ_{k=1}^{n} a_k φ_k)² = Σ_{k=1}^{n} Σ_{s=1}^{n} a_k a_s φ_k φ_s.   (2.10.8)
Therefore

Π(y_n) = ∫_0^L [p Σ_{k=1}^{n} Σ_{s=1}^{n} a_k a_s φ_k′ φ_s′ + q Σ_{k=1}^{n} Σ_{s=1}^{n} a_k a_s φ_k φ_s + 2 f Σ_{k=1}^{n} a_k φ_k] dx.   (2.10.9)
Calling

α_{k,s} = α_{s,k} = ∫_0^L (p φ_k′ φ_s′ + q φ_k φ_s) dx

β_k = ∫_0^L f φ_k dx   (2.10.10)
we have

Π(y_n) = Σ_{k=1}^{n} Σ_{s=1}^{n} α_{k,s} a_k a_s + 2 Σ_{k=1}^{n} β_k a_k.   (2.10.11)

The minimum conditions dΠ(y_n)/da_s = 0 then give

(1/2) dΠ(y_n)/da_s = Σ_{k=1}^{n} α_{k,s} a_k + β_s = 0   (2.10.12)
or

(1/2) dΠ(y_n)/da_s = Σ_{k=1}^{n} [∫_0^L (p φ_k′ φ_s′ + q φ_k φ_s) dx] a_k + ∫_0^L f φ_s dx = 0   (2.10.13)
or, finally,

∫_0^L (p y_n′ φ_s′ + q y_n φ_s + f φ_s) dx = 0,   s = 1, 2, . . . , n.   (2.10.15)

Integrating the first term by parts gives

∫_0^L p y_n′ φ_s′ dx = p y_n′ φ_s |_0^L − ∫_0^L d/dx (p y_n′) φ_s dx.

The first term on the right-hand side of this equation vanishes, as φ_s vanishes at 0 and L, and thus

∫_0^L p y_n′ φ_s′ dx = − ∫_0^L d/dx (p y_n′) φ_s dx.   (2.10.17)
Substituting Eq. (2.10.17) into Eq. (2.10.15) and recalling the definition of L(y) in Eq. (2.10.1), we finally obtain

∫_0^L L(y_n) φ_s dx = 0,   s = 1, 2, . . . , n.   (2.10.19)
Notice that the application of the Ritz method in this case has reduced the problem to that of the Galerkin method.
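The Ritz equations (2.10.12) lend themselves directly to computation. The minimal SymPy sketch below assembles α_{k,s} and β_k of Eq. (2.10.10) for the polynomial functions φ_k = (L − x)x^k of Eq. (2.10.5); the data p = 1, q = −1, f = −x and L = 1 are those of the example treated below, and with two terms the sketch reproduces the coefficients a_1 = 71/369 and a_2 = 7/41 obtained there. The use of SymPy is, of course, only illustrative.

import sympy as sp

x = sp.symbols('x')
L = 1
p, q, f = 1, -1, -x                                  # data of the worked example below
n = 2
phi = [(L - x) * x**k for k in range(1, n + 1)]      # trial functions of Eq. (2.10.5)

# alpha_{k,s} and beta_k of Eq. (2.10.10)
alpha = sp.Matrix(n, n, lambda k, s: sp.integrate(
    p * sp.diff(phi[k], x) * sp.diff(phi[s], x) + q * phi[k] * phi[s], (x, 0, L)))
beta = sp.Matrix(n, 1, lambda k, s: sp.integrate(f * phi[k], (x, 0, L)))

# Ritz equations (2.10.12): alpha * a + beta = 0
a = alpha.solve(-beta)
print(a)                                             # Matrix([[71/369], [7/41]])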
In the previous section, we discussed application of the Ritz method to problems with
homogeneous boundary conditions. Now, let us consider a problem with a general
non-homogeneous boundary condition as
y(x_1) = y_1
y(x_2) = y_2.   (2.10.20)

In this case, the approximate solution is sought in the form

y_n = Σ_{k=1}^{n} a_k φ_k + φ_0(x)   (2.10.21)

where φ_0(x) satisfies the given nonhomogeneous boundary conditions. Since the known functions φ_k(x) satisfy the homogeneous boundary conditions, φ_k(x_1) = φ_k(x_2) = 0, the function φ_0(x) must satisfy

φ_0(x_1) = y_1
φ_0(x_2) = y_2.   (2.10.23)
Example 1 Consider the differential equation

y″ + y + x = 0,   0 ≤ x ≤ 1   (2.10.25)

with the homogeneous boundary conditions y(0) = 0 and y(1) = 0. It is required to find an approximate solution of the equation using the Ritz method.
Solution: The exact solution of the above differential equation, obtained by the classical method for differential equations with constant coefficients, is

y = sin x / sin 1 − x.   (2.10.27)
Now, the approximate solution of Eq. (2.10.25) is found and compared with
Eq. (2.10.27).
The corresponding expression for the functional of Eq. (2.10.25) is

I = ∫_0^1 (y′² − y² − 2xy) dx.   (2.10.28)

Comparing Eq. (2.10.28) with Eq. (2.10.3) reveals that p = 1, q = −1, and f = −x.
The solution is approximated with one term of the series Eq. (2.10.5) as

y_1 = a_1 x(1 − x).   (2.10.29)

Substituting the approximate solution Eq. (2.10.29) in the expression for the functional, and using Eq. (2.10.19), gives

∫_0^1 L(y_1) φ_1 dx = ∫_0^1 [−2a_1 + a_1 x(1 − x) + x] x(1 − x) dx = 0

or

[(a_1/5) x⁵ − ((1 + 2a_1)/4) x⁴ + ((1 + 3a_1)/3) x³ − a_1 x²]_0^1 = 0

a_1/5 − (1 + 2a_1)/4 + (1 + 3a_1)/3 − a_1 = 0

a_1 = 5/18
and thus, the solution is

y_1 = (5/18) x(1 − x).   (2.10.30)
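As a quick check of the algebra above, the short SymPy sketch below evaluates the Galerkin condition Eq. (2.10.19) for the one-term approximation and recovers a_1 = 5/18; the use of SymPy is only illustrative.

import sympy as sp

x, a1 = sp.symbols('x a1')
y1 = a1 * x * (1 - x)                               # one-term trial, Eq. (2.10.29)
Ly1 = sp.diff(y1, x, 2) + y1 + x                    # L(y1) for y'' + y + x = 0

eq = sp.integrate(Ly1 * x * (1 - x), (x, 0, 1))     # Galerkin condition (2.10.19)
print(sp.solve(sp.Eq(eq, 0), a1))                   # [5/18]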
Example 2 Consider again the same problem as in Example 1, but with an approximate solution with two terms of the series considered, as

φ_1 = x(1 − x),   φ_2 = x²(1 − x)

and

y_2 = x(1 − x)(a_1 + a_2 x).

The Galerkin conditions Eq. (2.10.19) give

∫_0^1 L(y_2) φ_1 dx = 0   (2.10.31)

and

∫_0^1 L(y_2) φ_2 dx = 0   (2.10.32)

or

∫_0^1 [2(a_2 − a_1) − (6a_2 − 1 − a_1) x + (a_2 − a_1) x² − a_2 x³] (x − x²) dx = 0

∫_0^1 [2(a_2 − a_1) + (a_1 + 1 − 6a_2) x + (a_2 − a_1) x² − a_2 x³] (x² − x³) dx = 0.
Carrying out the integrations yields

18a_1 + 9a_2 − 5 = 0
(3/20) a_1 + (13/105) a_2 − 1/20 = 0.
Solving for a_1 and a_2 gives

a_1 = 71/369,   a_2 = 7/41

or

y_2 = x(1 − x)(71/369 + (7/41) x).   (2.10.33)
Now, the exact solution of the differential equation Eq. (2.10.25) is compared
with the one-term and two-term approximate solutions. The exact solution from
Eq. (2.10.27) is called y, the one-term approximate solution from Eq. (2.10.30) is
called y1 , and the two-term approximate solution from Eq. (2.10.33) is called y2 .
All three solutions satisfy the given boundary conditions at x = 0 and x = 1. To
compare the three solutions, their values at x = 1/4, x = 1/2, and x = 3/4 are
calculated and shown in Table 2.1.
Comparing the results, it is seen that the error of the first approximation is about 15% and that of the second approximation about 1%.
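A short script of the kind sketched below (illustrative only) reproduces this comparison by evaluating the exact solution Eq. (2.10.27) and the two approximations Eqs. (2.10.30) and (2.10.33) at the three points.

import math

def y_exact(x): return math.sin(x) / math.sin(1) - x            # Eq. (2.10.27)
def y_one(x):   return 5 / 18 * x * (1 - x)                     # Eq. (2.10.30)
def y_two(x):   return x * (1 - x) * (71 / 369 + 7 / 41 * x)    # Eq. (2.10.33)

for xv in (0.25, 0.5, 0.75):
    e, a, b = y_exact(xv), y_one(xv), y_two(xv)
    print(f"x = {xv}: exact = {e:.5f}, "
          f"y1 = {a:.5f} ({100 * (a - e) / e:+.1f}%), "
          f"y2 = {b:.5f} ({100 * (b - e) / e:+.1f}%)")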
Example 3 Consider the Bessel differential equation

x² y″ + x y′ + (x² − 1) y = 0   (2.10.34)

in the interval 1 ≤ x ≤ 2, with the boundary conditions y(1) = 1 and y(2) = 2. The exact solution of this Bessel differential equation under the given boundary conditions is

y = 3.6072 J_1(x) + 0.75195 Y_1(x)   (2.10.36)

where J_1 and Y_1 are the Bessel functions of the first and second kind of order 1.
Now, the solution of Eq. (2.10.34) may be obtained approximately using the Galerkin method. Let us change the dependent function y to z by the transformation y = z + x, so that z satisfies homogeneous conditions at x = 1 and x = 2; Eq. (2.10.34) then becomes

x z″ + z′ + ((x² − 1)/x) z + x² = 0   (2.10.37)

and the approximate solution of Eq. (2.10.34) with a one-term approximation becomes

y_1 = 0.8110 (x − 1)(2 − x) + x.   (2.10.41)
The exact solution Eq. (2.10.36) is compared with the one-term approximate solution Eq. (2.10.41) at three different locations in Table 2.2. It should be noted that very close agreement is reached even with the one-term approximation.
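The comparison can be reproduced with the sketch below, which evaluates Eq. (2.10.36) with SciPy's Bessel functions and Eq. (2.10.41) at three interior points; the choice of points and the use of SciPy are assumptions made here only for illustration.

from scipy.special import jv, yv

def y_exact(x):                         # Eq. (2.10.36)
    return 3.6072 * jv(1, x) + 0.75195 * yv(1, x)

def y_approx(x):                        # one-term Galerkin result, Eq. (2.10.41)
    return 0.8110 * (x - 1) * (2 - x) + x

for xv in (1.25, 1.5, 1.75):
    print(f"x = {xv}: exact = {y_exact(xv):.4f}, one-term = {y_approx(xv):.4f}")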
2.11 Problems
1. When the equilibrium problem Eqs. (2.8.1) and (2.8.2) is linear, the weighted residual methods applied to the trial family Eq. (2.8.4) all lead to equations for the C_j having the following form:
[a_11 a_12 . . . a_1r] [C_1]   [b_1]
[a_21 a_22 . . . a_2r] [C_2] = [b_2]
[  .    .          . ] [ . ]   [ . ]
[a_r1 a_r2 . . . a_rr] [C_r]   [b_r]

Show that for the collocation method

a_kj = L_2m[φ_j] |_(P_k)        b_k = { f − L_2m[φ_0] } |_(P_k)

where the P_k are the r locations arbitrarily selected. Show that for the subdomain method

a_kj = ∫_(D_k) L_2m[φ_j] dD        b_k = ∫_(D_k) { f − L_2m[φ_0] } dD

where the D_k are the r selected subdomains. Show that for the Galerkin method

a_kj = ∫_D φ_k L_2m[φ_j] dD        b_k = ∫_D φ_k ( f − L_2m[φ_0] ) dD
Note that in every case the matrix A has to do with the characteristics of the system and that the vector b is related to the loading in the domain and acting on the boundary.
2. Show that the equation applying to the unknown value of an approximate solution
to Poisson’s equation at a nodal point is the same by either the finite element or
finite difference method (solve this problem after the introduction to the finite
element method).
3. Employing the Galerkin method, solve the following differential equation:

y″ + (Ax + B) y = C
y(x_1) = 0
y(x_2) = 0
4. Consider a functional defined over a two-dimensional domain D as a function of x, y, the function u(x, y), and its first and second partial derivatives,

Π[u(x, y)] = ∬_D F(x, y, u, u_x, u_y, u_xx, u_yy, u_xy) dx dy

where u_x and u_y are the first partial derivatives of the function u, and u_xx, u_yy, and u_xy are the second partial derivatives with respect to x and y. We assume the function F to be at least differentiable up to the third order, and the extremizing function u(x, y) differentiable up to the second order. We further assume that the class of admissible functions u(x, y) has the following properties:
a- u(x, y) is prescribed on the boundary curve C.
b- u(x, y) and its partial derivatives with respect to x and y up to the second order are continuous in the domain D.
Using the method of calculus of variations, obtain the associated Euler equation
and the natural boundary conditions.
Further Readings
1. Kantorovich LV, Krylov VI (1964) Approximate methods of higher analysis. P. Noordhoff, Holland
2. Langhaar HL (1962) Energy methods in applied mechanics. Wiley, New York
3. Elsgolts L (1973) Differential equations and the calculus of variations. Mir Publishers, Moscow