NUMERICAL METHODS
in Computational Engineering

G.V. MILOVANOVIC, sci. advisor
(http://gauss.elfak.ni.ac.yu/)
D.R. DORDEVIC, lecturer
(http://www.gaf.ni.ac.yu/cdp/lecturer.htm)

Financed by Austrian Development Cooperation

UNIVERSITY OF NIS
FACULTY OF CIVIL ENGINEERING AND ARCHITECTURE

Nis, 2007.
Prof. Gradimir V. Milovanovic, University of Nis
Prof. Dorde R. Dordevic, University of Nis
Numerical Methods in Computational Engineering
ISBN 978-86-80295-81-7
The publishing of this script is part of the project CDP+ 53/2006 financed by Austrian
Cooperation through WUS Austria
This copy is not for sale
Preface  ix
Bibliography  62
Assignment-IV (http://www.gaf.ni.ac.yu/CDP/Assignment-IV.pdf)
5. Nonlinear Equations and Systems of Equations 65
5.1. Nonlinear equations 65
5.1.0. Introduction 65
5.1.1. Newton's method 68
5.1.2. Bisection method 72
5.1.3. Program realization 73
5.2. System of nonlinear equations 83
5.2.1. Newton-Kantorowitch (Raphson) method 83
5.2.2. Gradient method 87
5.2.3. Globally convergent methods 91
Bibliography . 94
Assignment-V (http://www.gaf.ni.ac.yu/CDP/Assignment-V.pdf)
6. Approximation and Interpolation 97
6.1. Introduction 97
6.2. Chebyshev systems 98
6.3. Lagrange's interpolation 99
6.4. Newton's interpolation with divided differences 100
6.5. Newton's interpolation formulas 102
6.6. Spline functions and interpolation by splines 104
6.7. Prony's interpolation 106
6.8. Packages for interpolation of functions 107
Bibliography . 108
Assignment-VI (http://www.gaf.ni.ac.yu/CDP/Assignment-VI.pdf)
7. Best Approximation of Functions 109
7.1. Introduction 109
7.2. Best L2 approximation 112
7.3. Best L∞ approximation 114
7.4. Packages for approximation of functions 119
Bibliography  119
Assignment-VII (http://www.gaf.ni.ac.yu/CDP/Assignment-VII.pdf)
8. Numerical Differentiation and Integration 121
8.1. Numerical differentiation 121
8.1.1. Introduction 121
8.1.2. Formulas for numerical differentiation 121
8.2. Numerical integration - Quadrature formulas 123
8.2.1. Introduction 123
8.2.2. Newton-Cotes formulas 124
8.2.3. Generalized quadrature formulas 126
8.2.4. Romberg integration 128
8.2.5. Program realization 128
8.2.6. On numerical evaluation of a class of double integrals 132
8.2.7. Packages for numerical integration 134
Bibliography  135
Assignment-VIII (http://www.gaf.ni.ac.yu/CDP/Assignment-VIII.pdf)
9. Ordinary Differential Equations - ODE 137
9.1. Introduction 137
9.2. Euler's method 138
9.3. General linear multi-step methods 139
9.4. Choice of initial values 141
9.5. Predictor-corrector methods 141
9.6. Program realization of multi-step methods 142
9.7. Runge-Kutta methods 145
9.8. Program realization of Runge-Kutta methods 150
9.9. Solution of systems of equations and equations of higher order 155
9.10. Boundary problems 158
9.11. Packages for ODEs 161
Bibliography . 161
Assignment-IX (http://www.gaf.ni.ac.yu/CDP/Assignment-IX.pdf)
10. Partial Differential Equations - PDE 163
10.1. Introduction 163
10.2. Grid method . 164
10.3. Laplace equation 165
10.4. Wave equation 167
10.5. Packages for PDEs 169
Bibliography 170
Assignment-X (http://www.gaf.ni.ac.yu/CDP/Assignment-X.pdf)
11. Integral Equations . 173
11.1. Introduction 173
11.2. Method of successive approximations 175
11.3. Application of quadrature formulas 175
11.4. Program realization 176
Bibliography 178
Assignment-XI (http://www.gaf.ni.ac.yu/CDP/Assignment-XI.pdf)
Appendices
A.l. Equations of Technical Physics (http://www.gaf.ni.ac.yu/CDP/ETPH.pdf)
A.2. Special Functions (http://www.gaf.ni.ac.yu/CDP/SPEC.pdf)
A.3. Numerical Methods in FEM (http://www.gaf.ni.ac.yu/CDP/NMFEM.pdf)
A.4. Numerical Methods in Informatics (http://www.gaf.ni.ac.yu/CDP/NMINF.pdf)
Preface
Authors
Faculty of Civil Engineering, Belgrade (Master Study)
Faculty of Civil Engineering and Architecture, Nis (Doctoral Study)
COMPUTATIONAL ENGINEERING
LECTURES
LESSON I
1.1 Calculus
The principal topics in calculus are the real and complex number systems, the concept of limits and convergence, and the properties of functions.
Convergence of a sequence of numbers x_i is defined as follows: the sequence x_i converges to the limit x* if, given any tolerance ε > 0, there is an index N = N(ε) so that for all i ≥ N we have |x_i − x*| ≤ ε. The notation for this is

    lim_{i→∞} x_i = x*.

A function f(x) is continuous if lim_{i→∞} f(x_i) = f(x*) holds for all x* and all ways for the x_i to converge to x*. We list six theorems from calculus which are useful for estimating values that appear in numerical computation.
Theorem 1 (Mean value theorem for continuous functions). Let f(x) be continuous on the interval [a, b]. Consider points XHI and XLOW in [a, b] and a value y so that f(XLOW) ≤ y ≤ f(XHI). Then there is a point p in [a, b] so that

    f(p) = y.
Theorem 2 (Mean value theorem for sums). Let f(x) be continuous on the interval [a, b], let x_1, x_2, ..., x_n be points in [a, b] and let w_1, w_2, ..., w_n be positive numbers. Then there is a point p in [a, b] so that

    ∑_{i=1}^{n} w_i f(x_i) = f(p) ∑_{i=1}^{n} w_i.
Theorem 3 (Mean value theorem for integrals). Let f(x) be continuous on the interval [a, b] and let w(x) be a nonnegative function [w(x) ≥ 0] on [a, b]. Then there is a point p in [a, b] so that

    ∫_a^b w(x) f(x) dx = f(p) ∫_a^b w(x) dx.
Theorems 2 and 3 show the analogy that exists between sums and integrals. This fact derives from the definition of the integral as

    ∫_a^b f(x) dx = lim ∑_i f(x_i) Δx_i,

where the points x_i with x_i < x_{i+1} are a partition of [a, b]. This analogy shows up for many numerical methods where one variation applies to sums and another applies to integrals. Theorem 2 is proved from Theorem 1, and then Theorem 3 is proved by a similar method. The assumption that w(x) ≥ 0 (w_i > 0) may be replaced by w(x) ≤ 0 (w_i < 0) in these theorems; it is essential that w(x) be of one sign, as shown by the example w(x) = f(x) = x and [a, b] = [−1, 1].
Theorem 4 (Continuous functions assume max/min values). Let f(x) be continuous on the interval [a, b] with |a|, |b| < ∞. Then there are points XHI and XLOW in [a, b] so that for all x in [a, b]

    f(XLOW) ≤ f(x) ≤ f(XHI).
As an illustration of the difference between theory and practice, the quantity [f(x+h) − f(x)]/h may be replaced by [f(x+h) − f(x−h)]/(2h) with no change in the theory but with dramatic improvement in the rate of convergence; that is, much more accurate estimates of f'(x) are obtained for a given value of h. The k-th derivative is the derivative of the (k−1)th derivative; they are denoted by d^k f/dx^k or f''(x), f'''(x), f^(4)(x), f^(5)(x), ....
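The gain from the centered formula is easy to check numerically. The following Python sketch (our illustration, not from the original text) compares both difference quotients for f = sin at x = 1:

```python
import math

def forward_diff(f, x, h):
    # one-sided quotient [f(x+h) - f(x)] / h, error of order h
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # centered quotient [f(x+h) - f(x-h)] / (2h), error of order h**2
    return (f(x + h) - f(x - h)) / (2.0 * h)

h = 1e-5
exact = math.cos(1.0)
print(abs(forward_diff(math.sin, 1.0, h) - exact))  # error of order h
print(abs(central_diff(math.sin, 1.0, h) - exact))  # error of order h**2, far smaller
```

With h = 1e-5, the one-sided error is around 10^-6 while the centered error is many orders of magnitude smaller, exactly the "dramatic improvement" described above.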
Theorem 5 (Mean value theorem for derivatives). Let f(x) be continuous and differentiable in [a, b], with |a|, |b| < ∞. Then there is a point p in [a, b] so that

    f'(p) = (f(b) − f(a)) / (b − a).
Theorem 6 (Taylor series with remainder). Let f(x) have n+1 continuous derivatives in [a, b]. Then

    f(x) = f(c) + f'(c)(x−c) + f''(c)(x−c)²/2! + f'''(c)(x−c)³/3! + ... + f^(n)(c)(x−c)^n/n! + R_{n+1}(x),

where R_{n+1} has either one of the following forms (p is a point between x and c):

    R_{n+1}(x) = f^(n+1)(p) (x−c)^(n+1) / (n+1)!,

    R_{n+1}(x) = (1/n!) ∫_c^x (x−t)^n f^(n+1)(t) dt.
If a function f depends on several variables, one can differentiate it with respect to one variable, say x, while keeping all the rest fixed. This is a partial derivative of f and it is denoted by ∂f/∂x or f_x. Higher order and mixed derivatives are defined by successive differentiation. Taylor's series for functions of several variables is a direct extension of the formula in Theorem 6, although the number of terms in it grows rapidly. For two variables it is

    f(x+h, y+k) = f(x, y) + (h ∂/∂x + k ∂/∂y) f(x, y) + (1/2!)(h ∂/∂x + k ∂/∂y)² f(x, y) + ....

(Fundamental theorem of algebra.) Let p(x) = a_0 + a_1 x + ... + a_n x^n be a polynomial, where the a_i are real or complex numbers and a_n ≠ 0. Then, there is a complex number p so that p(p) = 0.
good enough. But, for many other applications, 64-bit arithmetic is required. Higher precision (i.e. 64-bit, or even 128-bit) can be reached by software means, using double precision or quad precision, respectively. Of course, such software enhancement must be paid for by execution times up to ten times those of the single precision calculation.
As already told, computers store numbers not with infinite precision but rather in some approximation that can be packed into a fixed number of bits (binary digits) or bytes (groups of 8 bits). Almost all computers allow the programmer a choice among several different such representations or data types. Data types can differ in the number of bits utilized (the word-length), but also in the more fundamental respect of whether the stored number is represented in fixed-point (also called integer) or floating-point (also called real) format. A number in integer representation is exact. Arithmetic between numbers in integer representation is also exact, with the provisos that
(a) the answer is not outside the range of (usually signed) integers that can be represented, and
(b) division is interpreted as producing an integer result, throwing away any integer remainder.
[Bit-pattern illustration: sign, exponent, and mantissa fields of a typical 32-bit word for the numbers (a)-(f) discussed below.]

Figure 1.2.1.
In Fig. 1.2.1. are given floating-point representations of numbers in a typical 32-bit (4-byte) format, with the following examples:
(a) The number 1/2 (note the bias in the exponent);
(b) the number 3;
(c) the number 1/4;
(d) the number 10^-7, represented to machine accuracy;
(e) the same number 10^-7, but shifted so as to have the same exponent as the number 3; with this shifting, all significance is lost and 10^-7 becomes zero; shifting to a common exponent must occur before two numbers can be added;
(f) sum of the numbers 3 + 10^-7, which equals 3 to machine accuracy. Even though 10^-7 can be represented accurately by itself, it cannot accurately be added to a much larger number.
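These effects can be reproduced by emulating 32-bit storage. A Python sketch (ours, not from the original text), using the standard struct module to round a value to single precision:

```python
import struct

def to_f32(x):
    # round a Python float (64-bit) to the nearest 32-bit float and back
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_f32(1e-7))               # representable on its own, very close to 1e-7
print(to_f32(3.0 + 1e-7) == 3.0)  # True: in 32-bit storage the sum collapses to 3
```

The addend 10^-7 is smaller than half a unit in the last place of 3 in the 32-bit format, so the sum rounds back to exactly 3, just as in case (f) above.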
In floating-point representation, a number is represented internally by a sign bit s (interpreted as plus or minus), an exact integer exponent e, and an exact positive integer mantissa M. Taken together these represent the number

(1.2.1)    s × M × B^(e−E),

where B is the base of the representation (usually B = 2) and E is the exponent bias, a fixed integer for a given machine and representation.
Bit patterns that are "as left-shifted as they can be" are termed normalized. Most computers always produce normalized results, since these do not waste any bits of the mantissa and thus allow a greater accuracy of the representation. Since the high-order bit of a properly normalized mantissa (when B = 2) is always one, some computers do not store this bit at all, giving one extra bit of significance. Arithmetic among numbers in floating-point representation is not exact, even if the operands happen to be exactly represented (i.e., have exact values in the form of equation (1.2.1)). For example, two floating numbers are added by first right-shifting (dividing by two) the mantissa of the smaller (in magnitude) one, simultaneously increasing its exponent, until the two operands have the same exponent. Low-order (least significant) bits of the smaller operand are lost by this shifting. If the two operands differ too greatly in magnitude, then the smaller operand is effectively replaced by zero, since it is right-shifted to oblivion. The smallest (in magnitude) floating-point number which, when added to the floating-point number 1.0, produces a floating-point result different from 1.0 is termed the machine accuracy ε_m. A typical computer with B = 2 and a 32-bit word-length has ε_m around 3 × 10^-8. Generally speaking, the machine accuracy ε_m is the fractional accuracy to which floating-point numbers are represented, corresponding to a change of one in the least significant bit of the mantissa.
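The machine accuracy can be found by the standard halving loop. A Python sketch (ours); Python's floats are IEEE double precision (B = 2, 53-bit mantissa), so the loop yields 2^-52 ≈ 2.22 × 10^-16 rather than the 3 × 10^-8 of the 32-bit format discussed above:

```python
def machine_accuracy():
    # halve eps until adding eps/2 to 1.0 no longer changes it
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    return eps

print(machine_accuracy())  # 2.220446049250313e-16 for IEEE double precision
```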
(1.3.1)    x_{1,2} = (−b ± √(b² − 4ac)) / (2a).

When ac ≪ b², the addition becomes critical and round-off could ruin the calculation (see Section 1.6).
Roundoff error is a characteristic of computer hardware. There is another, different, kind of error that is a characteristic of the program or algorithm used, independent of the hardware on which the program is executed. Many numerical algorithms compute "discrete" approximations to some desired "continuous" quantity. For example, an integral is evaluated numerically by computing a function at a discrete set of points, rather than at "every" point. Or, a function may be evaluated by summing a finite number of leading terms in its infinite series, rather than all infinitely many terms. In cases like this, there is an adjustable parameter, e.g., the number of points or of terms, such that the "true" answer is obtained only when that parameter goes to infinity. Any practical calculation is done with a finite, but sufficiently large, choice of that parameter.
The discrepancy between the true answer and the answer obtained in a practical calculation is called the truncation error. Truncation error would persist even on a hypothetical, "perfect" computer that had an infinitely accurate representation and no roundoff error. As a general rule there is not much that a programmer can do about roundoff error, other than to choose algorithms that do not magnify it unnecessarily. Truncation error, on the other hand, is entirely under the programmer's control. In fact, it is only a slight exaggeration to say that clever minimization of truncation error is practically the entire content of the field of numerical analysis.
Most of the time, truncation error and roundoff error do not strongly interact with one another. A calculation can be imagined as having, first, the truncation error that it would have if run on an infinite-precision computer, and in addition, the roundoff error associated with the number of operations performed.
Some computations are very sensitive to round-off and others are not. In some problems sensitivity to round-off can be eliminated by changing the formula or method. This is not always possible; there are many problems which are inherently sensitive to round-off and any other uncertainties. Thus we must distinguish between sensitivity of methods and sensitivity inherent in problems.
The word stability appears during numerical computations and refers to continuous dependence of a solution on the data of the problem or method. If one says that a method is numerically unstable, one means that the round-off effects are grossly magnified by the method. Stability also has precise technical meanings (not always the same) in different areas as well as in this general one.
Solving differential equations usually leads to difference equations, like

    x_{i+2} = (5/2) x_i − (13/6) x_{i+1}.

Here, the sequence x_1, x_2, ... is defined, and for given initial conditions x_1 and x_2 of the differential equation, we get the initial conditions for the difference equation. For example, x_1 = 30, x_2 = 25. Computing in succession with 4, 8, 16, 32, 64 decimal digits gives results that can be compared with the exact solution, x_i = 36(5/6)^i. (Compute in Mathematica, using N[x[i+2], k], where k = 4, 8, 16, 32, 64 is the number of decimal digits.)
  i       4 digits        8 digits       16 digits    True value
  1         30.00           30.00          30.00        30.00
  2         25.00           25.00          25.00        25.00
  3         20.83           20.8333        20.8333      20.8333
  4         17.36           17.3611        17.3611      17.3611
  5         14.46           14.4676        14.4676      14.4676
  6         12.07           12.0563        12.0563      12.0563
  7         10.00           10.0470        10.0469      10.0469
  8          9.618           8.3724         8.3724       8.3724
  9          6.541           6.9773         6.9770       6.9770
 10          7.121           5.8133         5.8142       5.8142
 11           .925           4.8478         4.8452       4.8452
 12         16.790           4.0290         4.0376       4.0376
 13        -31.920           3.3888         3.3647       3.3647
 14        108.700           2.7318         2.8039       2.8039
 16        954.600           1.2978         1.9472       1.9472
 18       8676.000          -4.4918         1.3522       1.3522
 20      77170.000         -51.6565          .9390        .9390
 22       6.9 x 10^5      -472.7080          .6521        .6521
 25      -1.8 x 10^7     12701.1000          .3776        .3774
 28       5.0 x 10^8   -545079.0000          .2114        .2184
 30       4.6 x 10^9      -2.1 x 10^6        .1071        .1517
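The behaviour in the table can be reproduced by emulating low-precision decimal arithmetic. The sketch below is ours: it assumes the difference equation x_{i+2} = (5/2)x_i − (13/6)x_{i+1}, whose characteristic roots are 5/6 and −3 and which reproduces the tabulated exact values from x_1 = 30, x_2 = 25; Python's decimal module plays the role of Mathematica's variable precision:

```python
from decimal import Decimal, getcontext

def run(prec, steps):
    # iterate x[i+2] = (5/2) x[i] - (13/6) x[i+1] with `prec` significant decimal digits
    getcontext().prec = prec
    x = [Decimal(30), Decimal(25)]
    for _ in range(steps - 2):
        x.append(Decimal(5) / 2 * x[-2] - Decimal(13) / 6 * x[-1])
    return x

print(run(4, 20)[-1])   # ruined by round-off: the parasitic (-3)^i mode dominates
print(run(32, 20)[-1])  # close to the exact value 36*(5/6)^20 = 0.9390...
```

The exact solution 36(5/6)^i decays, but any rounding error excites the second solution of the difference equation, which grows like (−3)^i; more digits only delay, never prevent, the eventual takeover.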
The condition number of evaluating a function f at a point x is

    c = x f'(x) / f(x).

This number estimates how much an uncertainty in the data x of a problem is magnified in its solution f(x). If this number is large, then the problem is said to be ill-conditioned or poorly conditioned.
The given formula is for the simplest case of a function of a single variable; it is not easy to obtain such formulas for more complex problems that depend on many variables of different types. We can see three different ways that a problem can have a large condition number:
1. f'(x) may be large while x and f(x) are not;
If we evaluate 1 + √|x − 1| for x very close to 1, then x and f(x) are nearly 1, but f'(x) is large and the computed value is highly sensitive to change in x.
2. f(x) may be small while x and f'(x) are not;
The Taylor series for sin x near π, or e^(−x) with x large, exhibit this form of ill-conditioning.
3. x may be large while f'(x) and f(x) are not;
The evaluation of sin x for x near 1000000π is poorly conditioned.
One can also say that a computation is ill-conditioned, and this is the same as saying it is numerically unstable. The condition number gives more information than just saying something is numerically unstable. It is rarely possible to obtain accurate values for condition numbers, but one rarely needs much accuracy; an order of magnitude is often enough to know.
Note that it is almost impossible for a method to be numerically stable for an ill-conditioned problem.
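A sketch (ours, not from the original) of the condition-number formula applied to case 3:

```python
import math

def condition_number(f, fprime, x):
    # relative condition number c = |x f'(x) / f(x)| from the formula above
    return abs(x * fprime(x) / f(x))

# case 3: evaluating sin x near x = 1000000*pi is poorly conditioned
x = 1_000_000 * math.pi
print(condition_number(math.sin, math.cos, x))    # enormous: sin(x) is nearly zero

# by contrast, sin x at x = 1 is perfectly well conditioned
print(condition_number(math.sin, math.cos, 1.0))  # about 0.64
```

The order of magnitude is all that matters here: near 1000000π the rounding of x alone already destroys most of the significance of sin x.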
Example 1.3.1. An ill-conditioned line intersection problem consists in computing the point of intersection P of two nearly parallel lines. It is clear that a minor change in one line changes the point of intersection to (P + δP), which is far from P. A mathematical model of this problem is obtained by introducing a coordinate system and writing the equations

    y = a1 x + b1
    y = a2 x + b2

or, equivalently,

    a1 x − y = −b1
    a2 x − y = −b2

with a1 and a2 nearly equal, since the lines are nearly parallel. This numerical problem is unstable or ill-conditioned, as it reflects the ill-conditioning of the original problem.
A mathematical model is obtained by introducing a coordinate system. Any two vectors will do for a basis, and if we choose to use the unusual basis

    b1 = (0.5703958095, 0.8213701274)
    b2 = (0.5703955766, 0.8213701274)

then the equations of the two lines in this coordinate system are

    y = −0.0000000513 + 0.9999998843 x
    y = −0.0000045753 + 1.000001596 x

with the point of intersection P with coordinates (−0.8903429339, 0.8903427796). Note that the mathematical model is very ill-conditioned: a change of 0.0000017117 in the data makes the two lines parallel, with no solution.
The poor choice of a basis in the given example made the problem poorly conditioned. In more complex problems it is not so easy to see that a poor choice has been made. In fact, a poor choice is sometimes the most natural thing to do. For example, in problems involving polynomials, one naturally takes vectors based on 1, x, x², ..., x^n as a basis, but these are terribly ill-conditioned even for n moderate in size.
Example 1.3.2. The system of equations (input information)

    2x + 6y = 8
    2x + 6.0001y = 8.0001

has the solution (output information) x = 1, y = 1. If the coefficients of the second equation are changed only slightly, the solution changes drastically.
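The original breaks off before giving the changed coefficients, so the perturbation below is our own choice, but it shows the intended effect: a change in the fourth decimal of the second equation moves the solution from (1, 1) to (10, −2).

```python
def solve2(a1, b1, c1, a2, b2, c2):
    # solve a1*x + b1*y = c1, a2*x + b2*y = c2 by Cramer's rule
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

print(solve2(2.0, 6.0, 8.0, 2.0, 6.0001, 8.0001))  # approximately (1.0, 1.0)
# a tiny change in the second equation moves the solution far away:
print(solve2(2.0, 6.0, 8.0, 2.0, 5.9999, 8.0002))  # approximately (10.0, -2.0)
```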
successively magnified until it comes to swamp the true answer. An unstable method would be useful on a hypothetical, perfect computer; but in this imperfect world it is necessary for us to require that algorithms be stable, or if unstable that we use them with great caution. Here is a simple, if somewhat artificial, example of an unstable algorithm (see [4], p. 20).
Example 1.3.4. Suppose that it is desired to calculate all integer powers of the so-called "Golden Mean", the number given by

    Φ = (√5 − 1)/2 ≈ 0.61803398.

The powers Φ^n satisfy the simple recurrence

(1.3.3)    Φ^(n+1) = Φ^(n−1) − Φ^n.

Thus, knowing the first two values Φ^0 = 1 and Φ^1 = 0.61803398, we can apply (1.3.3) by subtraction, rather than a slower multiplication by Φ, at each stage. Unfortunately, the recurrence (1.3.3) also has another solution, namely the value −(√5 + 1)/2. Since the recurrence is linear, and since this undesired solution has magnitude greater than unity, any small admixture of it introduced by roundoff errors will grow exponentially. On a typical machine with 32-bit word-length, (1.3.3) starts to give completely wrong answers by about n = 16, at which point Φ^n is down to only 10^-4. Thus, the recurrence (1.3.3) is unstable, and cannot be used for the purpose stated.
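A sketch (ours) that emulates 32-bit arithmetic with the struct module and runs the recurrence to n = 20:

```python
import struct

def f32(x):
    # round to the nearest 32-bit float, emulating single-precision storage
    return struct.unpack('f', struct.pack('f', x))[0]

phi = (5.0 ** 0.5 - 1.0) / 2.0   # the Golden Mean, ~0.61803398
a, b = f32(1.0), f32(phi)        # phi**0 and phi**1 in 32-bit storage
for n in range(2, 21):
    a, b = b, f32(a - b)         # recurrence (1.3.3): phi**(n+1) = phi**(n-1) - phi**n
print(b, phi ** 20)              # the two disagree badly by n = 20
```

The direct power phi**20 stays accurate, while the subtraction recurrence has been overrun by the exponentially growing admixture of −(√5 + 1)/2.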
At the end of this section, there remains the question: how can errors and uncertainty be estimated?
One almost never knows the error in a computed result unless one already knows the true solution, and so one must settle for estimates of the error. There are three basic approaches to error estimates. The first is forward error analysis, where one uses the theory of the numerical method plus information about the uncertainty in the problem and attempts to predict the error in the computed result. The information one might use includes:
- the size of round-off,
- the measurement errors in problem data,
- the truncation errors in obtaining the numerical model from the mathematical model,
- the differences between the mathematical model and the original physical model.
The second approach is backward error analysis, where one takes a computed solution and sees how close it comes to solving the original problem. The backward error is often called the residual in equations. This approach requires that the problem involve satisfying some conditions (such as an equation) which can be tested with a trial solution. This prevents it from being applicable to all numerical computations, e.g. numerically estimating the value of π or the value of an integral.
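For a linear system the residual is simply b − Ax̃ for a trial solution x̃. A minimal sketch (ours), reusing the matrix of Example 1.3.2 with an arbitrary trial solution:

```python
def residual(A, x, b):
    # backward error check: r = b - A*x for a trial solution x
    n = len(b)
    return [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

A = [[2.0, 6.0], [2.0, 6.0001]]
b = [8.0, 8.0001]
print(residual(A, [1.0, 1.0], b))    # exact solution: residual essentially zero
print(residual(A, [1.05, 0.98], b))  # trial solution: small but clearly nonzero residual
```

Note that for an ill-conditioned system a small residual does not guarantee a small forward error; that is exactly the distinction Figure 1.3.1 below illustrates.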
The third approach is experimental error analysis, where one experiments with changing the computations, the method, or the data to see the effect they have on the results. If one truly wants certainty about the accuracy of a computed value, then one should give the problem to two (or even more) different groups and ask them to solve it. The groups are not allowed to talk together, preventing a wrong idea from being passed around.
The relationship between these three approaches can be illustrated graphically, as in Figure 1.3.1.

[Schematic: the exact solution and the computed solution are shown as points; the forward error is the discrepancy between them, while the backward error is measured by substituting the computed solution back into the problem.]

Figure 1.3.1
1.4. Programming
There are several a.reas of knowledge ahont programming that are 1weded for sci-
entif-ic computation. These induclf~ kuowledge about:
Debugging programs is au art as well a.s a science, and it must be learned through
practice. There are sev(~ral dfectivc tactics to use, like:
- Intermediate output
- Consultations about program with experienced user
- Use compiler and d<~lmgging tools.
Smm~ hints:
- Use lots of cmm1wnts
- Usc meaniugfnlwmws for variables
- Make t.lw typ<~s of variables obvious
- Use simph-~ logical coutrol stnH:tnres
- Usc program p<u:kag<~s and systems (Mathematic:a, Matlab) wherever possible
- Ust strnctnn~d programming
- Use (if possible) OOP t.<~dmics for technical problems.
1.5. Numerical software

Numerical software is published in several journals, among them:
- Applied Statistics
- BIT
- The Computer Journal
- Numerische Mathematik
The ACM Algorithms series contains more than a thousand items and is available as the Collected Algorithms of the Association for Computing Machinery.
Three general libraries of programs for numerical computations are widely available:
IMSL  International Mathematical Scientific Library
NAG  Numerical Algorithms Group, Oxford University
SSP  Scientific Subroutine Package, IBM Corporation
There are a substantial number of important, specialized software packages. Most of the packages listed below are available from IMSL, Inc.
Example 1.6.1. Solve the quadratic equation

    ax² + bx + c = 0

with 5, 10, 16, ..., 100 decimal digits using FORTRAN and Mathematica code. Take a = 1, c = 2, b = 6.2123 · 10^10. Use the following two codes:
(a) Direct use of the formula:

    DIS=SQRT(B*B-4.*A*C)
    X1=(-B+DIS)/(2*A)
    X2=(-B-DIS)/(2*A)

(b) Avoiding the cancellation:

    DIS=SQRT(B*B-4.*A*C)
    IF(B.LT.0.) THEN
      X1=(-B+DIS)/(2*A)
    ELSE
      X1=(-B-DIS)/(2*A)
    END IF
    X2=C/X1
Compare the obtained results.
There are two important lessons to be learned from Example 1.6.1:
1. Round-off error can completely ruin a short, simple computation.
2. A simple change in the method might eliminate adverse round-off effects.
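For readers following in Python rather than FORTRAN, here is a sketch (ours) of the same two methods; the stable variant computes the large-magnitude root first and recovers the other from the product of the roots (x1 · x2 = c/a):

```python
import math

def roots_naive(a, b, c):
    # direct quadratic formula; one root suffers cancellation when b*b >> 4*a*c
    dis = math.sqrt(b * b - 4.0 * a * c)
    return (-b + dis) / (2.0 * a), (-b - dis) / (2.0 * a)

def roots_stable(a, b, c):
    # pick the sign that avoids cancellation, then use x2 = c / (a * x1)
    dis = math.sqrt(b * b - 4.0 * a * c)
    x1 = (-b + dis) / (2.0 * a) if b < 0.0 else (-b - dis) / (2.0 * a)
    return x1, c / (a * x1)

a, b, c = 1.0, 6.2123e10, 2.0
print(roots_naive(a, b, c))   # small root ruined by cancellation
print(roots_stable(a, b, c))  # small root close to -c/b, i.e. about -3.219e-11
```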
Example 1.6.2. Calculation of π.
Using the five following algorithms, calculate π in order to illustrate the various effects of round-off on somewhat different computations.
One of them uses π = 6 arcsin(1/2), with arcsin expanded in its Taylor series:

    π = 6 (0.5 + (0.5)³/(2·3) + 1·3·(0.5)⁵/(2·4·5) + 1·3·5·(0.5)⁷/(2·4·6·7) + ...).

Another uses the classical rational bounds

    3 + 1137/8069 = 3.1409... < π < 3 + 1335/9347 = 3.1428....
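A sketch (ours; the book's codes are in FORTRAN and Mathematica) of the arcsin-series algorithm. At x = 1/2 each term is roughly a quarter of the previous one, so about 30 terms exhaust double precision:

```python
import math

def pi_arcsin_series(terms):
    # pi = 6*arcsin(1/2); each term of the arcsin series is obtained from
    # the previous one via the ratio (2k-1)^2 * x^2 / (2k * (2k+1))
    x = 0.5
    total = x
    term = x
    for k in range(1, terms):
        term *= (2 * k - 1) * (2 * k - 1) * x * x / (2 * k * (2 * k + 1))
        total += term
    return 6.0 * total

print(pi_arcsin_series(30), math.pi)  # agree to full double precision
```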
[8] Stoer, J., ancl. 'Bulirsch, R., Introduction to Numerical Analysis. Springer-
Verlag, New York, 1980.
[9] Kahaner, D., Moler, C., and Nash, S., Numerical Methods and Software.,
Prentice Hall, Englewood Cliffs, 1989.
[10] Johnson, L.W., and Riess, R.D., Numerical Analysis. 2nd ed., Addison-Wesley, Reading, 1982.
[11] Wilkinson, J.H., Rounding Errors in Algebraic Processes. Prentice-Hall, Englewood Cliffs, 1964.
[12] Milovanovic, G.V. and Kovacevic, M.A., Zbirka resenih zadataka iz numericke analize. Naucna knjiga, Beograd, 1985. (Serbian).
[13] Forsythe, G.E., Malcolm, M.A., and Moler, C.B., Computer Methods for
Mathematical Computations. Englewood Cliffs, Prentice-Hall, NJ, 1977.
[14] IMSL Math/Library Users Manual, IMSL Inc., 2500 City West Boulevard,
Houston TX 77042.
[15] NAG Fortran Library, Numerical Algorithms Group, 256 Banbury Road, Oxford OX2 7DE, U.K.
LESSON II
If all the leading principal minors

    Δ_k = det[a_ij] (i, j = 1, ..., k)    (k = 1, ..., n−1)

are different from zero, the matrix A = [a_ij]_{n×n} can be written in the form

(2.1.1.1)    A = LR,

Nevertheless, if the diagonal elements of matrix R (or L) take fixed values, none of them being equal to zero, the decomposition is unique. In regard to (2.1.1.2) and (2.1.1.3), and having in mind

    a_ij = ∑_{k=1}^{min(i,j)} l_ik r_kj    (i, j = 1, ..., n),
the elements of matrices L and R can be easily determined by a recursive procedure, giving in advance the values for the elements r_ii (≠ 0) or l_ii (≠ 0) (i = 1, ..., n). For example, if the numbers r_ii (≠ 0) (i = 1, ..., n) are given, it holds

    l_i1 = a_i1 / r_11    (i = 1, ..., n).

In a similar way a recursive procedure can be defined for determination of the elements of matrices L and R, if the numbers l_ii (≠ 0) (i = 1, ..., n) are given in advance. In practical applications one usually takes r_ii = 1 (i = 1, ..., n) or l_ii = 1 (i = 1, ..., n).
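A sketch (ours, not the book's FORTRAN) of this recursive procedure with the common choice l_ii = 1 (a Doolittle-style factorization), assuming no zero pivots arise:

```python
def lr_decompose(A):
    # LR factorization with l[i][i] = 1; valid when no zero pivot R[i][i] occurs
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):      # row i of R
            R[i][j] = A[i][j] - sum(L[i][k] * R[k][j] for k in range(i))
        for j in range(i + 1, n):  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * R[k][i] for k in range(i))) / R[i][i]
    return L, R

A = [[4.0, 3.0], [6.0, 3.0]]
L, R = lr_decompose(A)
print(L)  # [[1.0, 0.0], [1.5, 1.0]]
print(R)  # [[4.0, 3.0], [0.0, -1.5]]
```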
A very frequent case in applications is that of multi-diagonal matrices, i.e. matrices with elements different from zero on the main diagonal and around the main diagonal. For example, if a_ij ≠ 0 for |i − j| ≤ 1 and a_ij = 0 for |i − j| > 1, the matrix is tridiagonal. The elements of such a matrix are usually written as vectors (a_2, ..., a_n), (b_1, ..., b_n), (c_1, ..., c_{n−1}), i.e.

                 [ b1  c1  0   ...  0   0  ]
                 [ a2  b2  c2  ...  0   0  ]
(2.1.1.4)    A = [ 0   a3  b3  ...  0   0  ]
                 [ ..................... ]
                 [ 0   0   0   ...  an  bn ]
If a_ij ≠ 0 (|i − j| ≤ 2) and a_ij = 0 (|i − j| > 2), we have the case of a five-diagonal matrix.
Let us now suppose that the tridiagonal matrix (2.1.1.4) fulfills the conditions of Theorem 2.1.1.1. For the decomposition of such a matrix it is enough to suppose that

        [ β1  0   0   ...  0   0  ]
        [ α2  β2  0   ...  0   0  ]
    L = [ 0   α3  β3  ...  0   0  ]    (β1 β2 ... βn ≠ 0)
        [ ..................... ]
        [ 0   0   0   ...  αn  βn ]

and

        [ 1  γ1  0   ...  0  ]
        [ 0  1   γ2  ...  0  ]
    R = [ 0  0   1   ...  0  ]
        [ .................. ]
        [ 0  0   0   ...  1  ]

Since

         [ β1  β1γ1      0      ...  0            ]
         [ α2  α2γ1+β2   β2γ2   ...  0            ]
    LR = [ 0   α3        α3γ2+β3 ... 0            ]
         [ ...................................... ]
         [ 0   0   ...   αn      αnγn−1+βn        ]

we get the following recursive formulas for determination of the elements αi, βi, γi:

    β1 = b1,    γ1 = c1/β1,
    αi = ai,    βi = bi − ai γi−1,    γi = ci/βi    (i = 2, ..., n−1),
    αn = an,    βn = bn − an γn−1.
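A sketch (ours) of these recursive formulas, with zero-based Python lists standing in for the one-based vectors of (2.1.1.4):

```python
def tridiag_lr(a, b, c):
    # factor the tridiagonal matrix (2.1.1.4):
    # a = subdiagonal (a2..an), b = main diagonal (b1..bn),
    # c = superdiagonal (c1..c(n-1)); returns L's entries (alpha, beta)
    # and R's superdiagonal gamma
    n = len(b)
    alpha = a[:]                      # alpha_i = a_i
    beta = [b[0]] + [0.0] * (n - 1)   # beta_1 = b_1
    gamma = [0.0] * (n - 1)
    for i in range(1, n):
        gamma[i - 1] = c[i - 1] / beta[i - 1]   # gamma_i = c_i / beta_i
        beta[i] = b[i] - a[i - 1] * gamma[i - 1]
    return alpha, beta, gamma

# 3x3 example with diagonal 2 and off-diagonals 1; det A = beta1*beta2*beta3 = 4
alpha, beta, gamma = tridiag_lr([1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0])
print(beta)
```

Only O(n) operations and O(n) storage are needed, which is the whole point of exploiting the tridiagonal structure.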
(A − λI)x = 0,

one can conclude that equation (2.1.2.1) has non-trivial solutions (in x) if and only if det(A − λI) = 0.
2.2.1. Introduction
Numerical problems in linear algebra can be classified in several groups:
1. Solution of systems of linear algebraic equations

    Ax = b,

where A is a regular matrix; calculation of the determinant of matrix A; and matrix A inversion;
2. Solution of an arbitrary system of linear equations using the least-squares method;
3. Determination of eigenvalues and eigenvectors of a given square matrix;
4. Solution of problems in linear programming.
For the solution of these problems a number of methods have been developed. They can be separated into two classes, as follows.
The first class contains the so-called direct methods, sometimes known as exact methods. The basic characteristic of these methods is that one gets the result after a finite number of transformations (steps). Presuming all operations are performed exactly, the obtained result would be absolutely exact. Of course, because the computations are performed with rounding of intermediate results, the final result is of limited exactness.
The second class is made of iterative methods, which obtain the result after an infinite number of steps. As initial values for iterative methods one usually uses the results obtained by some direct method.
Note that for the solution of systems with a big number of equations, as used for the solution of partial differential equations, iterative methods are usually applied.
(2.2.2.2)    Ax = b,

where

        [ a11  a12  ...  a1n ]        [ b1 ]        [ x1 ]
    A = [ a21  a22  ...  a2n ]    b = [ b2 ]    x = [ x2 ]
        [ .................. ]        [ .. ]        [ .. ]
        [ an1  an2  ...  ann ]        [ bn ]        [ xn ]

Suppose that the system of equations (2.2.2.2) has a unique solution. It is well known that the solutions of system (2.2.2.1), i.e. (2.2.2.2), can be expressed using Cramer's rule,

    x_i = det A_i / det A    (i = 1, 2, ..., n),
(2.2.2.3)    Rx = c,

where

        [ r11  r12  ...  r1n ]
    R = [ 0    r22  ...  r2n ]
        [ .................. ]
        [ 0    0    ...  rnn ]
System (2.2.2.3) is solved successively starting from the last equation. Namely,

    x_n = c_n / r_nn,
    x_i = (1/r_ii) (c_i − ∑_{k=i+1}^{n} r_ik x_k)    (i = n−1, ..., 1).

Note that the coefficients r_ii ≠ 0, because of the assumption that system (2.2.2.2), i.e. (2.2.2.3), has a unique solution.
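A sketch (ours) of this back substitution:

```python
def back_substitution(R, c):
    # solve the triangular system (2.2.2.3) R x = c from the last equation upward
    n = len(c)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(R[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (c[i] - s) / R[i][i]
    return x

R = [[2.0, 1.0, 1.0],
     [0.0, 3.0, 1.0],
     [0.0, 0.0, 4.0]]
print(back_substitution(R, [7.0, 5.0, 8.0]))  # [2.0, 1.0, 2.0]
```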
We will show now how system (2.2.2.1) can be reduced to an equivalent system with triangular matrix.
Supposing a_11 ≠ 0, let us compute first the factors

    m_i1 = a_i1 / a_11    (i = 2, ..., n),

and then, by multiplying the first equation of system (2.2.2.1) by m_i1 and subtracting from the i-th equation, one gets a system of n − 1 equations

(2.2.2.4)

where

    a_ij^(2) = a_ij − m_i1 a_1j,    b_i^(2) = b_i − m_i1 b_1    (i, j = 2, ..., n).

Assuming a_22^(2) ≠ 0, and applying the same procedure to (2.2.2.4), with

    m_i2 = a_i2^(2) / a_22^(2)    (i = 3, ..., n),

one gets a system of n − 2 equations, where

    a_ij^(3) = a_ij^(2) − m_i2 a_2j^(2),    b_i^(3) = b_i^(2) − m_i2 b_2^(2)    (i, j = 3, ..., n).

From the obtained systems, taking the first equations, one gets the system of equations

    a_11^(1) x1 + a_12^(1) x2 + a_13^(1) x3 + ... + a_1n^(1) xn = b_1^(1)
                  a_22^(2) x2 + a_23^(2) x3 + ... + a_2n^(2) xn = b_2^(2)
                                a_33^(3) x3 + ... + a_3n^(3) xn = b_3^(3)
                                                            ...
20 Numerical Methods in Computational Engineering
where we put a_ij^(1) = a_ij, b_i^(1) = b_i.
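The triangular reduction described above can be summarized in a short sketch (illustrative Python, without pivoting; the text's programs are in FORTRAN):

```python
def gauss_eliminate(A, b):
    """Triangular reduction of Ax = b by Gauss elimination
    (no pivoting: every leading element a_kk^(k) must be nonzero)."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # the factor m_ik
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    return A, b

R, c = gauss_eliminate([[2.0, 1.0], [4.0, 3.0]], [3.0, 7.0])
print(R, c)  # [[2.0, 1.0], [0.0, 1.0]] [3.0, 1.0]
```

The returned pair (R, c) is then handed to back substitution.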
The presented triangular reduction, or, as it is often called, Gauss elimination, is actually determination of the coefficients

        m_ik = a_ik^(k) / a_kk^(k),

        a_ij^(k+1) = a_ij^(k) - m_ik a_kj^(k),    b_i^(k+1) = b_i^(k) - m_ik b_k^(k)    (i, j = k+1, ..., n),

for k = 1, 2, ..., n - 1. Note that the elements of matrix R and vector c are given as

        r_ij = a_ij^(i)    (i = 1, ..., n;  j = i, ..., n),    c_i = b_i^(i)    (i = 1, ..., n).
If some pivotal element a_kk^(k) is zero (or very small in modulus), in the k-th elimination step one takes as pivot the element of largest modulus, with permutation of the k-th and r-th row (equations) and of the k-th and s-th column (unknowns). Such a method is called the method with total choice of pivotal element.

One can show (see [1], pp. 233-234) that the total number of computations required by the Gauss method behaves, for n big enough, as N(n) ≈ 2n³/3. It was a long-standing opinion that the Gauss method is optimal regarding the number of computations. Nowadays, V. Strassen, by involving an iterative algorithm for multiplication and inversion of matrices, gave a new method for solution of systems of linear equations, by which the number of computations is of order n^(log₂ 7). Strassen's method is thus better than the Gauss method, since log₂ 7 < 3.
Triangular reduction also provides a simple computation of the system determinant. Namely, it holds

        det A = a_11^(1) a_22^(2) ... a_nn^(n).

When the Gauss method with choice of pivotal element is used, one should take care about the number of permutations of rows (and of columns, when using the method of total choice of pivotal element), which influences the sign of the determinant. This way of determinant calculation is highly efficient. For example, for calculation of a determinant of order n = 30 one needs about 0.1 s, presuming that one arithmetic operation takes 10 μs.
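The determinant computation just described, including the sign changes caused by row permutations, can be sketched as follows (illustrative Python with partial pivoting; the matrix is the one used in the later examples of this lesson):

```python
def det_by_elimination(A):
    """Determinant as the product of pivots after Gauss elimination with
    partial pivoting; each row swap flips the sign of the determinant."""
    n = len(A)
    A = [row[:] for row in A]
    sign = 1.0
    for k in range(n - 1):
        # choose the pivot row: largest |a_ik| for i >= k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0.0:
            return 0.0                      # singular matrix
        if p != k:
            A[k], A[p] = A[p], A[k]
            sign = -sign
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    d = sign
    for i in range(n):
        d *= A[i][i]
    return d

print(det_by_elimination([[3.0, 1.0, 6.0],
                          [2.0, 1.0, 3.0],
                          [1.0, 1.0, 1.0]]))  # ≈ 1.0
```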
Let A be a regular matrix of order n and let

        X = [x_ij]_{n×n} = [x1 x2 ... xn]

be its inverse matrix. Vectors x1, x2, ..., xn are the first, second, ..., n-th column of matrix X. Let us now define vectors e1, e2, ..., en as the columns of the unit matrix I = [e1 e2 ... en]. Regarding the equality AX = I, determination of the inverse matrix X reduces to solving the n systems of linear equations

(2.2.3.1)        A x_i = e_i    (i = 1, ..., n).

For solving systems (2.2.3.1) it is convenient to use the Gauss method, taking into account that matrix A appears as the matrix of all the systems, so that its triangular reduction needs to be done only once. By this procedure all the transformations necessary for the triangular reduction of matrix A should be applied to the unit matrix I = [e1 e2 ... en] too. In this way matrix A transforms to the triangular matrix R, and matrix I to a matrix C = [c1 c2 ... cn]. Finally, the triangular systems of the form

        R x_i = c_i    (i = 1, ..., n)

should be solved.
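The same idea, carrying the unit matrix along with A during elimination, can be sketched compactly; the sketch below uses full (Jordan) elimination of the augmented matrix [A | I] rather than the two triangular solves described above (illustrative Python; A is assumed regular):

```python
def invert(A):
    """Invert a regular matrix A by applying all elimination transformations
    to the unit matrix as well (Gauss-Jordan on [A | I], partial pivoting)."""
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]
        for i in range(n):
            if i != k:
                f = M[i][k]
                M[i] = [vi - f * vk for vi, vk in zip(M[i], M[k])]
    return [row[n:] for row in M]

X = invert([[3.0, 1.0, 6.0],
            [2.0, 1.0, 3.0],
            [1.0, 1.0, 1.0]])
print(X)  # ≈ [[-2, 5, -3], [1, -3, 3], [1, -2, 1]]
```

The result agrees with the inverse printed in the listing of Program 2.2.5.7 later in this lesson.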
Consider now the system of equations

(2.2.4.1)        Ax = b,

with square matrix A whose main diagonal minors are all different from zero. Then, based on Theorem 2.1.1.1, there exists a factorization A = LR, where L is a lower and R an upper triangular matrix. The factorization is uniquely defined if, for example, one adopts a unit diagonal in matrix L. In this case system (2.2.4.1), i.e. system LRx = b, can be presented in the equivalent form

        Ly = b,    Rx = y.

Based on the previous, for solving the system of equations (2.2.4.1) the following method can be formulated:

1. Put l_ii = 1 (i = 1, ..., n);
2. For i = 1, ..., n compute

        r_ij = a_ij - sum_{k=1}^{i-1} l_ik r_kj    (j = i, ..., n),

        l_ji = (1 / r_ii) ( a_ji - sum_{k=1}^{i-1} l_jk r_ki )    (j = i+1, ..., n).
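The factorization scheme above (unit diagonal in L) can be sketched as follows (illustrative Python; the text's realizations are in FORTRAN):

```python
def lr_factor(A):
    """Doolittle-type LR factorization with unit diagonal in L, assuming all
    main diagonal minors of A are nonzero (a sketch of the scheme above)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):        # row i of R
            R[i][j] = A[i][j] - sum(L[i][k] * R[k][j] for k in range(i))
        for j in range(i + 1, n):    # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * R[k][i] for k in range(i))) / R[i][i]
    return L, R

L, R = lr_factor([[3.0, 1.0, 6.0],
                  [2.0, 1.0, 3.0],
                  [1.0, 1.0, 1.0]])
print(R[0], L[1][0])  # first row of R equals the first row of A; L[1][0] = 2/3
```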
Let us analyze the modification of the elements a_ij (= a_ij^(1)) during the process of triangular reduction. Because, for k = 1, 2, ..., n - 1,

        a_ij^(k+1) = a_ij^(k) - m_ik a_kj^(k)    (i, j = k+1, ..., n),

and

        a_i1^(k+1) = a_i2^(k+1) = ... = a_ik^(k+1) = 0    (i = k+1, ..., n),

by summation we get

        a_ij = sum_{k=1}^{i-1} m_ik r_kj + r_ij    (i ≤ j)

and

        a_ij = sum_{k=1}^{j} m_ik r_kj    (i > j).

By defining m_ii = 1 (i = 1, ..., n), the last two equalities can be given in the form

(2.2.4.3)        a_ij = sum_{k=1}^{p} m_ik r_kj,

where p = min(i, j). Equality (2.2.4.3) points out that the Gauss elimination procedure gives the LR factorization of matrix A, where

        [ 1                  ]          [ r11  r12  ...  r1n ]
L =     [ m21  1             ] ,   R =  [      r22  ...  r2n ] ,
        [ ...                ]          [           ...      ]
        [ mn1  mn2  ...   1  ]          [                rnn ]

and r_kj = a_kj^(k). During program realization of the Gauss method, in order to obtain the LR factorization of matrix A it is not necessary to use new memory space for matrix L; it is convenient to store the factors m_ik in place of the matrix A coefficients which are annulled in the process of triangular reduction. In this way, after completed triangular reduction, the memory space of matrix A holds both matrices L and R, according to the following scheme:

        [ r11  r12  ...  r1n ]
A  →    [ m21  r22  ...  r2n ]
        [ ...                ]
        [ mn1  mn2  ...  rnn ]

Note that the diagonal elements of matrix L, all equal to unity, need not be memorized.
The Cholesky method, based on LR factorization, is used when matrix A fulfils the conditions of Theorem 2.1.1.1. Nevertheless, the usability of this method can be broadened to other systems with regular matrix, taking into account permutation of equations in the system. For factorization the Gauss elimination method with pivoting is used. There will be LR = A', where matrix A' is obtained from matrix A by a finite number of row interchanges. This means that in the elimination process the set of indices of the pivot elements I = (p1, ..., p_{n-1}), where p_k is the number of the row from which the main element is taken in the k-th elimination step, should be memorized. When solving the system Ax = b, after accomplishing the process of factorization, the coordinates of vector b should be permuted according to the set of indices I. In this way the transformed vector b' is obtained, so that solving of the given system reduces to successive solving of the triangular systems

        Ly = b',    Rx = y.
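The described procedure (factor once with pivoting, memorize the pivot indices, then permute b and solve two triangular systems) can be sketched in Python as follows; compare the FORTRAN subroutines LRFAK and RSTS given later in this lesson:

```python
def lr_factor_pivot(A):
    """Gauss elimination with partial pivoting. The factors m_ik are stored
    in place of the annulled coefficients (L below the diagonal, unit diagonal
    implied; R on and above the diagonal); pivot rows are recorded in p."""
    n = len(A)
    A = [row[:] for row in A]
    p = []
    for k in range(n - 1):
        r = max(range(k, n), key=lambda i: abs(A[i][k]))
        p.append(r)
        if r != k:
            A[k], A[r] = A[r], A[k]
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]                 # store the factor m_ik in place
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
    return A, p

def lr_solve(LU, p, b):
    """Permute b as recorded in p, then solve Ly = b' and Rx = y."""
    n = len(b)
    b = b[:]
    for k, r in enumerate(p):
        b[k], b[r] = b[r], b[k]
    for i in range(n):                         # forward substitution (unit L)
        b[i] -= sum(LU[i][k] * b[k] for k in range(i))
    for i in range(n - 1, -1, -1):             # back substitution
        b[i] = (b[i] - sum(LU[i][k] * b[k] for k in range(i + 1, n))) / LU[i][i]
    return b

LU, p = lr_factor_pivot([[3.0, 1.0, 6.0],
                         [2.0, 1.0, 3.0],
                         [1.0, 1.0, 1.0]])
print(lr_solve(LU, p, [2.0, 7.0, 4.0]))  # ≈ [19.0, -7.0, -8.0] (cf. RESENJE in the listing)
```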
For concrete matrices A (of order 4) and B (of dimensions 4 × 3), given in the input, the program computes the product C = Bᵀ·A; the output reads:
MATRIX C=B(TR)* A
1.0 2.0 -5.0 -1.0
-3.0 21.0 11.0 29.0
-8.0 -19.0 -9.0 -27.0
C====================================================
C CHOLESKY METHOD
C====================================================
DIMENSION A(10,10), B(10)
OPEN(8,FILE='CHOLESKY.IN')
OPEN(5,FILE='CHOLESKY.OUT')
33 READ(8,100)N
100 FORMAT(I2)
IF(N)11,22,11
11 READ(8,101)(B(I),I=1,N)
101 FORMAT(8F10.4)
STOP
END
For the factorization of matrix A (= LR) we take a unit diagonal in the upper triangular matrix R, i.e. r_ii = 1 (i = 1, ..., n). The program is organized so that matrix A transforms to a matrix A1, whose lower triangle (including the main diagonal) is equal to matrix L, and whose strict upper triangle is equal to matrix R. Note that the diagonal elements of matrix R are not memorized, but only formally printed, using the statement FORMAT. Note also that in Section 2.2.4 the unit diagonal was adopted in matrix L.

By applying this program to the corresponding system of equations, the following results are obtained:
MATRIX DIMENSION = 4
MATRICA A VEKTOR B
1.0000000 4.0000000 1.0000000 3.0000000 9.0000000
.0000000 -1.0000000 2.0000000 -1.0000000 .0000000
3.0000000 14.0000000 4.0000000 1.0000000 22.0000000
1.0000000 2.0000000 2.0000000 9.0000000 14.0000000
MATRIX L
1.0000000
.0000000 -1.0000000
3.0000000 2.0000000 5.0000000
1.0000000 -2.0000000 -3.0000000 2.0000000
MATRIX R
1.0000000 4.0000000 1.0000000 3.0000000
1.0000000 -2.0000000 1.0000000
1.0000000 -2.0000000
1.0000000
VEKTOR OF SOLUTIONS
1.0000000
1.0000000
1.0000000
1.0000000
Program 2.2.5.5. In a similar way the square root method can be realized for the solution of systems of linear equations with symmetric, positive definite matrix. In this case it is enough to read in only the main diagonal elements of matrix A and, for example, the elements of the upper triangle.

The program and output listing for the given system of equations are given in the following text. Note that from the point of view of memory usage it is convenient to treat matrix A as a vector. Nevertheless, for easier understanding, we did not follow this convenience here.

The program is organized so that, in addition to the solution of the system of equations, the determinant of the system is also obtained. In the output listing the lower triangle of the symmetric matrix is omitted.
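A compact sketch of the square root method in Python follows (illustrative only; the system used is the one solved in the FORTRAN listing below). The matrix is factored as A = RᵀR, after which Rᵀy = b and Rx = y are solved:

```python
import math

def square_root_method(A, b):
    """Square root (Cholesky) method for symmetric positive definite A:
    factor A = R^T R, then solve R^T y = b and R x = y."""
    n = len(b)
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        s = A[i][i] - sum(R[k][i] ** 2 for k in range(i))
        R[i][i] = math.sqrt(s)
        for j in range(i + 1, n):
            R[i][j] = (A[i][j] - sum(R[k][i] * R[k][j] for k in range(i))) / R[i][i]
    y = [0.0] * n
    for i in range(n):                    # forward substitution: R^T y = b
        y[i] = (b[i] - sum(R[k][i] * y[k] for k in range(i))) / R[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):        # back substitution: R x = y
        x[i] = (y[i] - sum(R[i][k] * x[k] for k in range(i + 1, n))) / R[i][i]
    return x

A = [[3.0, 0.0, 1.0],
     [0.0, 2.0, 1.0],
     [1.0, 1.0, 1.0]]
print(square_root_method(A, [4.0, 3.0, 3.0]))  # ≈ [1.0, 1.0, 1.0]
```

The determinant is obtained, as in the program, as the square of the product of the diagonal elements of R.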
$DEBUG
C=================================================
C SOLUTION OF SYSTEM OF LINEAR EQUATIONS
C BY SQUARE ROOT METHOD
C=================================================
DIMENSION A(10,10),B(10)
OPEN(8,FILE='SQR.IN')
OPEN(5,FILE='SQR.OUT')
3 READ(8,100)N
100 FORMAT(I2)
IF(N) 1,2,1
C READ IN VECTOR B
1 READ(8,101)(B(I),I=1,N)
101 FORMAT(8F10.4)
C READ IN UPPER TRIANGULAR PART OF MATRIX A
READ(8,101)((A(I,J),J=I,N),I=1,N)
WRITE(5,102)
102 FORMAT(////5X, 'MATRIX OF SYSTEM'/)
WRITE(5,99)((A(I,J),J=I,N),I=1,N)
99 FORMAT(<12*I-11>X,<N-I+1>F12.7)
WRITE(5,105)
105 FORMAT(//5X, 'VECTOR OF FREE MEMBERS'/)
WRITE(5,133)(B(I),I=1,N)
133 FORMAT(1X,10F12.7)
C OBTAINING OF ELEMENTS OF UPPER TRIANGULAR MATRIX
A(1,1)=SQRT(A(1,1))
DO 11 J=2,N
11 A(1,J)=A(1,J)/A(1,1)
DO 12 I=2,N
S=0.
IM1=I-1
DO 13 K=1,IM1
13 S=S+A(K,I)*A(K,I)
A(I,I)=SQRT(A(I,I)-S)
IF(I-N) 29,12,29
29 IP1=I+1
DO 14 J=IP1,N
S=0.
DO 15 K=1,IM1
15 S=S+A(K,I)*A(K,J)
14 A(I,J)=(A(I,J)-S)/A(I,I)
12 CONTINUE
C CALCULATION OF DETERMINANT
DET=1.
DO 60 I=1,N
60 DET=DET*A(I,I)
DET=DET*DET
C SOLUTION OF SYSTEM L*Y=B
B(1)=B(1)/A(1,1)
DO 7 I=2,N
IM1=I-1
S=0.
DO 8 K=1,IM1
8 S=S+A(K,I)*B(K)
P=1./A(I,I)
7 B(I)=P*(B(I)-S)
c
C SOLUTION OF SYSTEM R*X=Y
C MEMORIZING OF RESULTS INTO VECTOR B
c
B(N)=B(N)/A(N,N)
NM1=N-1
DO 30 II=1,NM1
JJ=N-II
S=0.
JJP1=JJ+1
DO 50 K=JJP1,N
50 S=S+A(JJ,K)*B(K)
30 B(JJ)=(B(JJ)-S)/A(JJ,JJ)
c
MATRIX OF SYSTEM

  3.0000000    .0000000   1.0000000
              2.0000000   1.0000000
                          1.0000000
VECTOR OF FREE MEMBERS
4.0000000 3.0000000 3.0000000
MATRIX R

  1.7320510    .0000000    .5773503
              1.4142140    .7071068
                           .4082483
SYSTEM DETERMINANT D= 1.0000000
SYSTEM SOLUTION
.9999999 .9999998 1.0000000
SUBROUTINE LRFAK(A,N,IP,DET,KB)
DIMENSION A(1),IP(1)
KB=O
N1=N-1
INV=O
DO 45 K=1,N1
IGE=(K-1)*N+K
C
C FINDING THE PIVOTAL ELEMENT IN K-TH
C ELIMINATION STEP
C
GE=A(IGE)
I1=IGE+1
I2=K*N
IMAX=IGE
DO 20 I=I1,I2
IF(ABS(A(I))-ABS(GE)) 20,20,10
10 GE=A(I)
IMAX=I
20 CONTINUE
IF(GE)25,15,25
15 KB=1
c
C MATRIX OF SYSTEM IS SINGULAR
c
RETURN
25 IP(K)=IMAX-N*(K-1)
IF(IP(K)-K) 30,40,30
30 I=K
IK=IP(K)
c
C ROW PERMUTATION
c
DO 35 J=1,N
S=A(I)
A(I)=A(IK)
A(IK)=S
I=I+N
35 IK=IK+N
INV=INV+1
c
C K-TH ELIMINATION STEP
c
40 DO 45 I=I1,I2
A(I)=A(I)/GE
IA=I
IC=IGE
K1=K+1
DO 45 J=K1,N
IA=IA+N
IC=IC+N
45 A(IA)=A(IA)-A(I)*A(IC)
c
C CALCULATION OF DETERMINANT
c
DET=1.
DO 50 I=1,N
IND=I+(I-1)*N
50 DET=DET*A(IND)
IF(INV-INV/2*2) 55,55,60
60 DET=-DET
55 RETURN
END
c
c
SUBROUTINE RSTS(A,N,IP,B)
DIMENSION A(1),IP(1),B(1)
c
C SUCCESSIVE SOLUTION OF TRIANGULAR SYSTEMS
c
N1=N-1
C VECTOR B PERMUTATION
DO 10 I=1,N1
I1=IP(I)
IF(I1-I) 5,10,5
5 S=B(I)
B(I)=B(I1)
B(I1)=S
10 CONTINUE
C SOLUTION OF LOWER TRIANGULAR SYSTEM
DO 15 K=2,N
IA=-N+K
K1=K-1
DO 15 I=1,K1
IA=IA+N
15 B(K)=B(K)-A(IA)*B(I)
C SOLUTION OF UPPER TRIANGULAR SYSTEM
NN=N*N
B(N)=B(N)/A(NN)
DO 25 KK=1,N1
K=N-KK
IA=NN-KK
I=N+1
DO 20 J=1,KK
I=I-1
B(K)=B(K)-A(IA)*B(I)
20 IA=IA-N
25 B(K)=B(K)/A(IA)
RETURN
END
DIMENSION A(100),B(10),IP(9)
OPEN(8,FILE='FACTOR.IN')
OPEN(5,FILE='FACTOR.OUT')
READ(8,5)N
5 FORMAT(I2)
NN=N*N
READ(8,10)(A(I),I=1,NN)
10 FORMAT(16F5.0)
WRITE(5,34)
34 FORMAT(1H1,5X,'MATRICA A'/)
DO 12 I=1,N
12 WRITE(5,15)(A(J),J=I,NN,N)
15 FORMAT(10F10.5)
CALL LRFAK(A,N,IP,DET,KB)
IF(KB) 20,25,20
20 WRITE(5,30)
30 FORMAT(1HO,'MATRICA JE SINGULARNA'//)
GO TO 70
25 WRITE(5,35)
35 FORMAT(1H0,5X, 'FAKTORIZOVANA MATRICA'/)
DO 55 I=1,N
55 WRITE(5,15)(A(J),J=I,NN,N)
WRITE(5,75)DET
75 FORMAT(/5X,'DETERMINANTA MATRICE A='F10.6/)
50 READ(8,10,END=70) (B(I),I=1,N)
WRITE(5,40)(B(I),I=1,N)
40 FORMAT(/5X,'VEKTOR B'//(10F10.5))
CALL RSTS(A,N,IP,B)
WRITE(5,45) (B(I),I=1,N)
45 FORMAT(/5X,'RESENJE'//(10F10.5))
GO TO 50
70 CLOSE(5)
CLOSE(8)
STOP
END
1 MATRICA A
3.00000 1.00000 6.00000
2.00000 1.00000 3.00000
1.00000 1.00000 1.00000
0 FAKTORIZOVANA MATRICA
3.00000 1.00000 6.00000
.33333 .66667 -1.00000
.66667 .50000 -.50000
DETERMINANTA MATRICE A= 1.000000
VEKTOR B
2.00000 7.00000 4.00000
RESENJE
18.99999 -7.00000 -8.00000
VEKTOR B
1.00000 1.00000 1.00000
RESENJE
.00000 1.00000 .00000
Program 2.2.5.7. Using the subroutines LRFAK and RSTS, and having in mind Section 2.2.3, it is easy to write a program for matrix inversion. The corresponding program and output result (for the matrix from the previous example) have the following forms:
C================================================
C INVERZIJA MATRICE
C================================================
1 MATRICA A
3.00000 1.00000 6.00000
2.00000 1.00000 3.00000
1.00000 1.00000 1.00000
0 INVERZNA MATRICA
-2.00000   5.00000  -3.00000
1.00000 -3.00000 3.00000
1.00000 -2.00000 1.00000
Bibliography

[1] Milovanovic, G.V., Numerical Analysis I, Naucna knjiga, Beograd, 1988 (Serbian).
[2] Milovanovic, G.V. and Djordjevic, Dj.R., Programiranje numerickih metoda na FORTRAN jeziku. Institut za dokumentaciju zastite na radu "Edvard Kardelj", Nis, 1981 (Serbian).

(The full list of references and further reading is given at the end of Chapter 3.)
Faculty of Civil Engineering Faculty of Civil Engineering and Architecture
Belgrade                                    Nis
Master Study Doctoral Study
COMPUTATIONAL ENGINEERING
LECTURES
LESSON III

3. Linear Systems of Algebraic Equations: Iterative Methods

3.1. Introduction
Consider the system of linear equations

(3.1.1)        a_i1 x_1 + a_i2 x_2 + ... + a_in x_n = b_i    (i = 1, ..., n),

i.e., in matrix form,

(3.1.2)        Ax = b,

where

        [ a11  a12  ...  a1n ]         [ x1 ]         [ b1 ]
A =     [ a21  a22  ...  a2n ] ,  x =  [ x2 ] ,  b =  [ b2 ] .
        [ ...                ]         [ .. ]         [ .. ]
        [ an1  an2  ...  ann ]         [ xn ]         [ bn ]
In this chapter we always suppose that system (3.1.1), i.e. (3.1.2), has a unique solution.

Iterative methods for solving system (3.1.2) have as their goal the determination of the solution x with accuracy given in advance. Namely, starting with an arbitrary vector x^(0) (= [x_1^(0) ... x_n^(0)]^T), by iterative methods one defines the series of vectors x^(k) (= [x_1^(k) ... x_n^(k)]^T) such that

        lim_{k→+∞} x^(k) = x.
Suppose that system (3.1.2) is transformed to the equivalent form

(3.2.1)        x = Bx + β,

and define the iterative process (method of simple iteration)

(3.2.2)        x^(k) = B x^(k-1) + β    (k = 1, 2, ...).
Starting from an arbitrary vector x^(0) and using (3.2.2), one generates the series {x^(k)}, which under some conditions converges to the solution of the given system.

If

        [ b11  b12  ...  b1n ]              [ β1 ]
B =     [ b21  b22  ...  b2n ]    and  β =  [ β2 ] ,
        [ ...                ]              [ .. ]
        [ bn1  bn2  ...  bnn ]              [ βn ]

the process (3.2.2) can be written in scalar form as

        x_i^(k) = b_i1 x_1^(k-1) + ... + b_in x_n^(k-1) + β_i    (i = 1, ..., n),

where k = 1, 2, ....
One can prove (see [1]) that iterative process (3.2.2) converges if all eigenvalues of matrix B are by modulus less than one. Taking into account that the determination of the eigenvalues of a matrix is rather complicated, in practical applications of the method of simple iteration only sufficient convergence conditions are examined. Namely, for matrix B several norms can be defined, as for example

(3.2.3)        ||B||_1 = ( sum_{i,j=1}^{n} b_ij² )^{1/2},    ||B||_2 = max_i sum_{j=1}^{n} |b_ij|,    ||B||_3 = max_j sum_{i=1}^{n} |b_ij|.

It is not difficult to prove that iterative process (3.2.2) converges, for arbitrary initial vector x^(0), if ||B|| < 1.
This method can be modified so that for the computation of x_i^(k) all previously computed values x_1^(k), ..., x_{i-1}^(k) of the current iteration are used, while the remaining components x_i^(k-1), ..., x_n^(k-1) are taken from the previous iteration, i.e.

        x_i^(k) = sum_{j=1}^{i-1} b_ij x_j^(k) + sum_{j=i}^{n} b_ij x_j^(k-1) + β_i    (i = 1, ..., n),
where B = B1 + B2, with

        [ 0                          ]          [ b11  b12  ...  b1n ]
B1 =    [ b21  0                     ] ,  B2 =  [      b22  ...  b2n ] ,
        [ ...                        ]          [           ...      ]
        [ bn1  bn2 ... b_{n,n-1}  0  ]          [                bnn ]

so that this iterative process can be written in the form

(3.3.2)        x^(k) = B1 x^(k) + B2 x^(k-1) + β    (k = 1, 2, ...).
Theorem 3.3.1. For arbitrary vector x^(0), iterative process (3.3.2) converges if and only if all roots λ of the equation

        | b11-λ    b12      ...    b1n   |
        | λb21     b22-λ    ...    b2n   |
        | ...                            |  =  0
        | λbn1     λbn2     ...   bnn-λ  |

are by modulus less than one.
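The modified process, in which every newly computed component is used immediately within a sweep, can be sketched as follows (illustrative Python; the matrix B here may include diagonal terms, and a diagonally dominant example is used so that the process converges):

```python
def gauss_seidel(B, beta, x0, eps=1e-8, max_iter=500):
    """Iterate x = Bx + beta, using each new component x_i^(k) as soon
    as it is available within the sweep."""
    n = len(beta)
    x = x0[:]
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            # x[j] for j < i already holds the current iteration's values
            x[i] = sum(B[i][j] * x[j] for j in range(n)) + beta[i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) < eps:
            break
    return x

# the system 4x1 + x2 = 5, 2x1 + 3x2 = 5 rewritten in the form x = Bx + beta
B = [[0.0, -0.25], [-2.0 / 3.0, 0.0]]
beta = [1.25, 5.0 / 3.0]
print(gauss_seidel(B, beta, [0.0, 0.0]))  # ≈ [1.0, 1.0]
```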
SUBROUTINE NORMA(A,N,K,ANOR)
DIMENSION A(1)
NU=N*N
ANOR=0.
GO TO (10,20,40),K
10 DO 15 I=1,NU
15 ANOR=ANOR+A(I)**2
ANOR=SQRT(ANOR)
RETURN
20 DO 25 I=1,N
L=-N
S=O.
DO 30 J=1,N
L=L+N
IA=L+I
30 S=S+ABS(A(IA))
IF(ANOR-S) 35,25,25
35 ANOR=S
25 CONTINUE
RETURN
40 L=-N
DO 50 J=1,N
S=O.
L=L+N
DO 45 I=1,N
LI=L+I
45 S=S+ABS(A(LI))
IF(ANOR-S) 55,50,50
55 ANOR=S
50 CONTINUE
RETURN
END
The main program is organized so that before the iteration process begins, the convergence is examined. Namely, if at least one norm satisfies ||B||_k < 1 (k = 1, 2, 3), the iterative process proceeds; if not, the message that the convergence conditions are not fulfilled is printed and the program terminates.

For multiplying matrix B by the vector of the current iteration we use the subroutine MMAT, which is given in 2.2.5.2. As initial vector we take x^(0).

As the criterion for termination of the iterative process we adopted

        max_i | x_i^(k) - x_i^(k-1) | < ε.

On output we print the last iteration, which fulfills the above given criterion.

Taking accuracy ε = 10⁻⁵, for one concrete system of equations of fourth order (see output listing) we get the solution in the fourteenth iteration.
(3.4.2.1)        exp(A) = I + A/1! + A²/2! + ... = sum_{k=0}^{+∞} A^k / k!.

Let S_k be the k-th partial sum of series (3.4.2.1), and U_k its general member. Then the equalities

(3.4.2.2)        U_k = (1/k) U_{k-1} A,    S_k = S_{k-1} + U_k    (k = 1, 2, ...)

hold, whereby U_0 = S_0 = I (unit matrix of order n). By using equality (3.4.2.2) one can write a program for summation of series (3.4.2.1), where we usually take as the criterion for termination of the summation the case when the norm of the matrix U_k is less than an in advance given small positive number ε. In our case we will take the norm ||·||_2 (see formula (3.2.3)) and ε = 10⁻⁵.
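The summation scheme (3.4.2.2) translates directly into code; an illustrative Python sketch follows (the text's own realization, given below, is in FORTRAN; the termination test uses the maximum row sum norm, i.e. ||·||_2 of (3.2.3)):

```python
def expm_series(A, eps=1e-5):
    """Sum exp(A) = I + A + A^2/2! + ... using U_k = (1/k) U_{k-1} A,
    stopping when the norm ||U_k||_2 (maximum row sum) drops below eps."""
    n = len(A)
    S = [[float(i == j) for j in range(n)] for i in range(n)]  # S_0 = I
    U = [row[:] for row in S]                                  # U_0 = I
    k = 0
    while True:
        k += 1
        P = [[sum(U[i][m] * A[m][j] for m in range(n)) for j in range(n)]
             for i in range(n)]
        U = [[P[i][j] / k for j in range(n)] for i in range(n)]  # U_k = U_{k-1} A / k
        for i in range(n):
            for j in range(n):
                S[i][j] += U[i][j]
        norm = max(sum(abs(U[i][j]) for j in range(n)) for i in range(n))
        if norm < eps:
            return S

S = expm_series([[1.0, 0.0], [0.0, 0.0]])
print(S[0][0])  # ≈ e = 2.71828...
```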
C===========================~====================~=
C ODREDJIVANJE MATRICE EXP(A)
C==================================================
DIMENSION A(100), S(100), U(100), P(100)
OPEN(8,FILE='EXPA.IN')
OPEN(5,FILE='EXPA.OUT')
READ(8,10) N,EPS
10 FORMAT(I2,E5.0)
NN=N*N
READ(8,15)(A(I),I=1,NN)
15 FORMAT(16F5.0)
C FORMIRANJE JEDINICNE MATRICE
DO 20 I=1,NN
S(I)=0.
20 U(I)=0.
N1=N+1
DO 25 I=1,NN,N1
S(I)=1.
25 U(I)=1.
C SUMIRANJE MATRICNOG REDA
K=O
30 K=K+1
CALL MMAT(U,A,P,N,N,N)
B=1./K
DO 35 I=1,NN
U(I)=B*P(I)
35 S(I)=S(I)+U(I)
C ISPITIVANJE USLOVA ZA PREKID SUMIRANJA
CALL NORMA(U,N,2,ANOR)
IF(ANOR.GT.EPS)GO TO 30
WRITE(5,40)((A(I),I=J,NN,N),J=1,N)
40 FORMAT(2X,<5*N-9>X,'M A T R I C A   A'
1 //(<N>F10.5))
WRITE(5,45)((S(I),I=J,NN,N),J=1,N)
45 FORMAT(//<5*N-9>X,'M A T R I C A   EXP(A)'
1 //(<N>F10.5))
CLOSE(8)
CLOSE(5)
END
This program has been tested on the example

        [ 4   3  -3 ]
A =     [ 2   3  -2 ] ,
        [ 4   4  -3 ]

for which

(3.4.2.3)        exp(A) =  [ 3e²-2e   3e²-3e   -3e²+3e ]
                           [ 2e²-2e   2e²-e    -2e²+2e ] .
                           [ 4e²-4e   4e²-4e   -4e²+5e ]

The output listing is of the form
M A T R I C A   A

  4.00000   3.00000  -3.00000
  2.00000   3.00000  -2.00000
  4.00000   4.00000  -3.00000

M A T R I C A   EXP(A)

 16.73060  14.01232 -14.01232
  9.34155  12.05983  -9.34155
 18.68310  18.68310 -15.96482
By using (3.4.2.3) it is not hard to check that all figures in the obtained results are exact.

It is suggested to readers to write a code for the previous problem using the program package Mathematica.
Some general guidelines for selecting a method for solving systems of linear algebraic equations are given as follows.
• Direct elimination methods are preferred for small systems (n up to 50 to 100) and systems with few zeros (nonsparse systems). Gauss elimination is the method of choice.
• For tridiagonal systems, the Thomas algorithm is the method of choice ([3], Chapter 1).
• LU factorization methods are the methods of choice when more than one vector b must be considered.
• For large systems that are not diagonally dominant, the round-off errors can be large.
• Iterative methods are preferred for large, sparse matrices that are diagonally dominant. The SOR (Successive Over-Relaxation) method is recommended. Numerical experimentation to find the optimum over-relaxation factor ω is usually worthwhile if the system of equations is to be solved for many vectors b.
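The Thomas algorithm mentioned above can be sketched as follows (illustrative Python; arrays a, b, c hold the sub-, main and super-diagonal, so only O(n) operations and storage are needed):

```python
def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system: a = sub-diagonal,
    b = main diagonal, c = super-diagonal, d = right-hand side
    (a[0] and c[-1] are unused)."""
    n = len(d)
    cp = [0.0] * n          # modified super-diagonal
    dp = [0.0] * n          # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):   # forward sweep
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# example: the tridiagonal matrix diag(-1, 2, -1), right-hand side [1, 0, 1]
print(thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0],
             [1.0, 0.0, 1.0]))  # ≈ [1.0, 1.0, 1.0]
```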
Bibliography

[1] Milovanovic, G.V., Numerical Analysis I, Naucna knjiga, Beograd, 1988 (Serbian).
[2] Milovanovic, G.V. and Djordjevic, Dj.R., Programiranje numerickih metoda na FORTRAN jeziku. Institut za dokumentaciju zastite na radu "Edvard Kardelj", Nis, 1981 (Serbian).
[3] Hoffman, J.D., Numerical Methods for Engineers and Scientists. Taylor & Francis, Boca Raton-London-New York-Singapore, 2001.
[4] Press, W.H., Flannery, B.P., Teukolsky, S.A., and Vetterling, W.T., Numerical Recipes - The Art of Scientific Computing. Cambridge University Press, 1989.
[5] Albrecht, J., Fehlerabschätzungen bei Relaxationsverfahren zur numerischen Auflösung linearer Gleichungssysteme. Numer. Math. 3(1961), 188-201.
[6] Apostolatos, N. und Kulisch, U., Über die Konvergenz des Relaxationsverfahrens bei nicht-negativen und diagonaldominanten Matrizen. Computing 2(1967), 17-24.
[7] Golub, G.H. & Varga, R.S., Chebyshev semi-iterative methods, successive overrelaxation iterative methods and second order Richardson iterative methods. Numer. Math. 3(1961), 147-168.
[8] Southwell, R.V., Relaxation Methods in Theoretical Physics. 2 vols. Oxford Univ. Press, Fair Lawn, New Jersey, 1956.
[9] Varga, R.S., Matrix Iterative Analysis. Prentice-Hall, Englewood Cliffs, New Jersey, 1962.
[10] Young, D.M., Iterative Solution of Large Systems. Academic Press, New York, 1971.
[11] Ralston, A., A First Course in Numerical Analysis. McGraw-Hill, New York, 1965.
[12] Hildebrand, F.B., Introduction to Numerical Analysis. McGraw-Hill, New York, 1974.
[13] Acton, F.S., Numerical Methods That Work (corrected edition). Mathematical Association of America, Washington, D.C., 1990.
[14] Abramowitz, M., and Stegun, I.A., Handbook of Mathematical Functions. National Bureau of Standards, Applied Mathematics Series, Washington, 1964 (reprinted 1968 by Dover Publications, New York).
[15] Rice, J.R., Numerical Methods, Software, and Analysis. McGraw-Hill, New York, 1983.
[16] Forsythe, G.E., Malcolm, M.A., and Moler, C.B., Computer Methods for Mathematical Computations. Prentice-Hall, Englewood Cliffs, NJ, 1977.
[17] Forsythe, G.E., Solving linear algebraic equations can be interesting. Bull. Amer. Math. Soc. 59(1953), 299-329.
[18] Forsythe, G.E. & Moler, C.B., Computer Solution of Linear Algebraic Systems. Prentice-Hall, Englewood Cliffs, NJ, 1967.
[19] Kahaner, D., Moler, C., and Nash, S., Numerical Methods and Software. Prentice-Hall, Englewood Cliffs, NJ, 1989.
[20] Hamming, R.W., Numerical Methods for Engineers and Scientists. Dover, New York, 1962 (reprinted 1986).
[21] Ferziger, J.H., Numerical Methods for Engineering Applications. Stanford University, John Wiley & Sons, Inc., New York, 1998.
[22] Pearson, C.E., Numerical Methods in Engineering and Science. University of Washington, Van Nostrand Reinhold Company, New York, 1986.
[23] Stephenson, G. and Radmore, P.M., Advanced Mathematical Methods for Engineering and Science Students. Imperial College London, University College London, Cambridge Univ. Press, 1999.
[24] Milovanovic, G.V. and Kovacevic, M.A., Zbirka resenih zadataka iz numericke analize. Naucna knjiga, Beograd, 1985 (Serbian).
[25] IMSL Math/Library Users Manual, IMSL Inc., 2500 City West Boulevard, Houston TX 77042.
[26] NAG Fortran Library, Numerical Algorithms Group, 256 Banbury Road, Oxford OX2 7DE, U.K.
[27] Dongarra, J.J., et al., LINPACK Users Guide. S.I.A.M., Philadelphia, 1979.
[28] Golub, G.H., and Van Loan, C.F., Matrix Computations, 2nd ed. Johns Hopkins University Press, Baltimore, 1989.
[29] Gill, P.E., Murray, W., and Wright, M.H., Numerical Linear Algebra and Optimization, vol. 1. Addison-Wesley, Redwood City, CA, 1991.
[30] Stoer, J., and Bulirsch, R., Introduction to Numerical Analysis. Springer-Verlag, New York, 1980.
[31] Coleman, T.F., and Van Loan, C., Handbook for Matrix Computations. S.I.A.M., Philadelphia, 1988.
[32] Wilkinson, J.H., and Reinsch, C., Linear Algebra, vol. II of Handbook for Automatic Computation. Springer-Verlag, New York, 1971.
[33] Westlake, J.R., A Handbook of Numerical Matrix Inversion and Solution of Linear Equations. Wiley, New York, 1968.
[34] Johnson, L.W., and Riess, R.D., Numerical Analysis, 2nd ed. Addison-Wesley, Reading, MA, 1982.
[35] Ralston, A., and Rabinowitz, P., A First Course in Numerical Analysis, 2nd ed. McGraw-Hill, New York, 1978.
[36] Isaacson, E., and Keller, H.B., Analysis of Numerical Methods. Wiley, New York, 1966.
[37] Horn, R.A., and Johnson, C.R., Matrix Analysis. Cambridge University Press, Cambridge, 1985.
Faculty of Civil Engineering          Faculty of Civil Engineering and Architecture
Belgrade                              Nis
Master Study Doctoral Study
COMPUTATIONAL ENGINEERING
LECTURES
LESSON IV
4. Eigenvalue Problems
4.1. Introduction
An n×n matrix A is said to have an eigenvector x and corresponding eigenvalue λ if

(4.1.1)        A · x = λx.

Obviously any multiple of an eigenvector x will also be an eigenvector, but we won't consider such multiples as being distinct eigenvectors. (The zero vector is not considered to be an eigenvector at all.) Evidently (4.1.1) can hold only if

(4.1.2)        det |A - λI| = 0.
Definitions

A matrix is called symmetric if it is equal to its transpose,

(4.1.3)        A = Aᵀ,  i.e.  a_ij = a_ji,

Hermitian (or self-adjoint) if it equals the complex conjugate of its transpose (its Hermitian conjugate),

(4.1.4)        A = Aᴴ,

orthogonal if its transpose equals its inverse,

(4.1.5)        Aᵀ · A = A · Aᵀ = I,

and unitary if its Hermitian conjugate equals its inverse. Finally, a matrix is called normal if it commutes with its Hermitian conjugate,

(4.1.6)        A · Aᴴ = Aᴴ · A.
For real matrices, Hermitian means the same as symmetric, unitary means the same as orthogonal, and both of these distinct classes are normal.

The reason that "Hermitian" is an important concept has to do with eigenvalues. The eigenvalues of a Hermitian matrix are all real. In particular, the eigenvalues of a real symmetric matrix are all real. Contrariwise, the eigenvalues of a real nonsymmetric matrix may include real values, but may also include pairs of complex conjugate values; and the eigenvalues of a complex matrix that is not Hermitian will in general be complex.
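As a small numerical illustration of the eigenvalue concept (not part of the text's programs), the dominant eigenpair can be computed by the classical power iteration, sketched here in Python:

```python
def power_iteration(A, iters=200):
    """Power iteration: repeatedly apply A and rescale; converges to the
    dominant eigenpair when one eigenvalue strictly dominates in modulus."""
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y, key=abs)          # scale factor approximating lambda
        x = [v / lam for v in y]
    return lam, x

A = [[2.0, 1.0], [1.0, 2.0]]           # symmetric, eigenvalues 3 and 1
lam, x = power_iteration(A)
print(lam)  # 3.0
```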
The reason that "normal" is an important concept has to do with the eigenvectors. The eigenvectors of a normal matrix with non-degenerate (i.e., distinct) eigenvalues are complete and orthogonal, spanning the n-dimensional vector space. For a normal matrix with degenerate eigenvalues, we have the additional freedom of replacing the eigenvectors corresponding to a degenerate eigenvalue by linear combinations of themselves. Using this freedom, we can always perform Gram-Schmidt orthogonalization and find a set of eigenvectors that are complete and orthogonal, just as in the non-degenerate case. The matrix whose columns are an orthonormal set of eigenvectors is evidently unitary. A special case is that the matrix of eigenvectors of a real symmetric matrix is orthogonal, since the eigenvectors of that matrix are all real.

When a matrix is not normal, as typified by any random, nonsymmetric, real matrix, then in general we cannot find any orthonormal set of eigenvectors, nor even any pairs of eigenvectors that are orthogonal (except perhaps by rare chance). While the n non-orthonormal eigenvectors will "usually" span the n-dimensional vector space, they do not always do so; that is, the eigenvectors are not always complete. Such a matrix is said to be defective.
These are called left eigenvectors. By taking the transpose of (4.1.7), one can see that every left eigenvector is the transpose of a right eigenvector of the transpose of A. Now by comparing to (4.1.2), and using the fact that the determinant of a matrix equals the determinant of its transpose, we also see that the left and right eigenvalues of A are identical.

If the matrix A is symmetric, then the left and right eigenvectors are just transposes of each other, that is, have the same numerical values as components. Likewise, if the matrix is self-adjoint, the left and right eigenvectors are Hermitian conjugates of each other. For the general non-normal case, however, we have the following calculation: Let X_R be the matrix formed by columns from the right eigenvectors, and X_L be the matrix formed by rows from the left eigenvectors. Then (4.1.1) and (4.1.7) can be rewritten as

(4.1.8)        A · X_R = X_R · diag(λ1, ..., λn),    X_L · A = diag(λ1, ..., λn) · X_L.
Multiplying the first of these equations on the left by X_L, the second on the right by X_R, and subtracting the two, gives

(4.1.9)        (X_L · X_R) · diag(λ1, ..., λn) = diag(λ1, ..., λn) · (X_L · X_R).

This says that the matrix of dot products of the left and right eigenvectors commutes with the diagonal matrix of eigenvalues. But the only matrices that commute with a diagonal matrix of distinct elements are themselves diagonal. Thus, if the eigenvalues are non-degenerate, each left eigenvector is orthogonal to all right eigenvectors except its corresponding one, and vice versa. By choice of normalization, the dot products of corresponding left and right eigenvectors can always be made unity for any matrix with non-degenerate eigenvalues.

If some eigenvalues are degenerate, then either the left or the right eigenvectors corresponding to a degenerate eigenvalue must be linearly combined among themselves to achieve orthogonality with the right or left ones, respectively. This can always be done by a procedure akin to Gram-Schmidt orthogonalization. The normalization can then be adjusted to give unity for the nonzero dot products between corresponding left and right eigenvectors. If the dot product of corresponding left and right eigenvectors is zero at this stage, then you have a case where the eigenvectors are incomplete. Note that incomplete eigenvectors can occur only where there are degenerate eigenvalues, but do not always occur in such cases (in fact, never occur for the class of "normal" matrices).

In both the degenerate and non-degenerate cases, the final normalization to unity of all nonzero dot products produces the result: The matrix whose rows are left eigenvectors is the inverse matrix of the matrix whose columns are right eigenvectors, if the inverse exists.
Diagonalization of a Matrix
Multiplying the first equation in (4.1.8) on the left by X_L, and using the fact that X_L and X_R are matrix inverses, we get

(4.1.10)        X_R⁻¹ · A · X_R = diag(λ1, ..., λn).
While real nonsymmetric matrices can be diagonalized in their usual case of complete eigenvectors, the transformation matrix is not necessarily real. It turns out, however,
that a real similarity transformation can "almost" do the job. It can reduce the matrix
down to a form with little two-by-two blocks along the diagonal, all other elements zero.
Each two-by-two block corresponds to a complex-conjugate pair. of complex eigenvalues.
The "grand strategy" of virtually all modern eigensystem routines is to nudge the matrix A towards diagonal form by a sequence of similarity transformations,

(4.1.14)        A → P1⁻¹ · A · P1 → P2⁻¹ · P1⁻¹ · A · P1 · P2 → P3⁻¹ · P2⁻¹ · P1⁻¹ · A · P1 · P2 · P3 → etc.

If we get all the way to diagonal form, then the eigenvectors are the columns of the accumulated transformation

(4.1.15)        X_R = P1 · P2 · P3 · ...
Sometimes we do not want to go all the way to diagonal form. For example, if we are interested only in eigenvalues, not eigenvectors, it is enough to transform the matrix A to be triangular, with all elements below (or above) the diagonal zero. In this case the diagonal elements are already the eigenvalues, as you can see by mentally evaluating (4.1.2) using expansion by minors.
There are two rather different sets of techniques for implementing the strategy (4.1.14). It turns out that they work rather well in combination, so most modern eigensystem routines use both. The first set of techniques constructs individual P_i's as explicit "atomic" transformations designed to perform specific tasks, for example zeroing a particular off-diagonal element (Jacobi transformation), or a whole particular row or column (Householder transformation, elimination method). In general, a finite sequence of these simple transformations cannot completely diagonalize a matrix. There are then two choices: either use the finite sequence of transformations to go most of the way (e.g., to some special form like tridiagonal or Hessenberg) and follow up with the second set of techniques about to be mentioned; or else iterate the finite sequence of simple transformations over and over until the deviation of the matrix from diagonal is negligibly small. This latter approach is conceptually simplest. However, for n greater than about 10, it is computationally inefficient by a roughly constant factor of about 5.
The second set of techniques, called factorization methods, is more subtle.
Suppose that the matrix A can be factored into a left factor F_L and a right factor F_R.
Then
(4.1.16)   A = F_L · F_R   or equivalently   F_L^{-1} · A = F_R.
If we now multiply together the factors in the reverse order, and use the second
equation in (4.1.16), we get
(4.1.17)   F_R · F_L = F_L^{-1} · A · F_L,
which we recognize as having effected a similarity transformation on A with the
transformation matrix being F_L. The QR method, which exploits this idea, will be
explained later.
Factorization methods also do not converge exactly in a finite number of transfor-
mations. But the better ones do converge rapidly and reliably, and, when following an
appropriate initial reduction by simple similarity transformations, they are the methods
of choice. The presented considerations are very important for those dealing with
dynamics of structures and seismic engineering, especially in the phase of modelling and
dynamic response computation.
Definition 4.1.1. Let A = [a_ij] be a complex square matrix of order n. Every vector
x ∈ C^n which is different from the zero-vector is called an eigenvector of matrix A if
there exists a scalar λ ∈ C such that (4.1.1) holds. The scalar λ in (4.1.1) is called the
corresponding eigenvalue.
Having in mind that (4.1.1) can be presented in the form

    (A − λI) x = 0,

we conclude that equation (4.1.1) has non-trivial solutions (in x) if and only if (4.1.2)
holds.
Definition 4.1.2. If A is a square matrix, then the polynomial λ -> P(λ) = det(A − λI)
is called the characteristic polynomial, and the corresponding equation P(λ) = 0 its
characteristic equation.
Let A = [a_ij]_{n×n}. The characteristic polynomial can be expressed in the form

             | a_11 − λ   a_12     ...   a_1n     |
    P(λ) =   | a_21       a_22 − λ ...   a_2n     |
             | ...                                |
             | a_n1       a_n2     ...   a_nn − λ |

or

    P(λ) = (−1)^n (λ^n − p_1 λ^{n−1} + p_2 λ^{n−2} − · · · + (−1)^{n−1} p_{n−1} λ + (−1)^n p_n),

where p_k is the sum of all principal minors of order k of the determinant of matrix A.
Note that if λ_1, ..., λ_n are the eigenvalues of matrix A and Q is a polynomial, then

    Q(λ_1), ..., Q(λ_n)

are the eigenvalues of matrix Q(A).
Theorem 4.1.3. Let λ_1, ..., λ_n be the eigenvalues of a regular matrix A of order n.
Then the eigenvalues of A^{-1} are

    λ_1^{-1}, ..., λ_n^{-1}.

Matrices A and B of order n are said to be similar if there exists a regular matrix C
such that
    B = C^{-1} A C.
Theorem 4.1.6. Similar matrices have identical characteristic polynomials, and therewith identical eigenvalues.
If we denote by C the union of these discs, then all eigenvalues of matrix A lie in C.
Remark 4.2.1. Regarding the fact that matrix A^T has the same eigenvalues as matrix A,
on the basis of the previous theorem one can conclude that all eigenvalues of matrix A
are also located in the union D of the discs

    |λ − a_jj| ≤ s_j,   where   s_j = Σ_{i=1, i≠j}^{n} |a_ij|   (j = 1, ..., n).

Based on the previous, one concludes that all eigenvalues of matrix A lie in the
intersection of the sets C and D.
Theorem 4.2.2. If m discs from Theorem 4.2.1 form a connected area which is isolated
from the other discs, then exactly m eigenvalues of matrix A are located in this area.
The proof of this theorem can be found in the monograph of Wilkinson [7].
Lesson IV - Eigenvalue Problems 51
Example 4.2.1.
Take the 3 × 3 matrix A given in the source (its entries are only partially legible;
among them are −0.2, 0.1, 2, −0.1, 0.4 and 0.3). Based on Theorem 4.2.1, its
eigenvalues are located in discs
we say that λ_1, ..., λ_r are dominant eigenvalues of matrix A. In this section we will
consider a method for determination of the dominant eigenvalue and the corresponding
eigenvector, as well as some modifications of this method. We suppose that the eigenvectors
x_1, ..., x_n are linearly independent, forming a basis in C^n. Therefore, an arbitrary
non-zero vector v_0 can be expressed as

(4.3.1)    v_0 = a_1 x_1 + a_2 x_2 + · · · + a_n x_n.
The specially interesting case here is when one dominant eigenvalue λ_1 (r = 1) exists.
Assuming a_1 ≠ 0, on the basis of (4.3.2) we have
Because of

    v_{k+1} = a_1 λ_1^{k+1} (x_1 + ε_{k+1}),

based on the previous, for every i (1 ≤ i ≤ n) we have
Based on this fact, the method for determination of the dominant eigenvalue λ_1, known
as the power method, can be formulated. Vector v_k is thereby an approximation of the
non-normed eigenvector which corresponds to the dominant eigenvalue*. In a practical
realization of this method, norming of the eigenvector, i.e. of vector v_k, is performed
after every iteration step. Norming is performed by dividing vector z_k by its coordinate
with maximal modulus. So, the power method can be expressed by

(4.3.3)    z_k = A v_{k−1},   v_k = z_k / γ_k   (k = 1, 2, ...),

where γ_k is the coordinate of vector z_k with maximal modulus, i.e., γ_k = (z_k)_i and
|(z_k)_i| = ||z_k||_∞. Note that γ_k -> λ_1 and v_k -> x_1/||x_1||_∞ when k -> +∞.
Note that in deriving this method we supposed that a_1 ≠ 0, meaning that the method
converges if λ_1 is the dominant eigenvalue and if the initial vector v_0 has a component
in the direction of the eigenvector x_1. On the behavior of this method without those
assumptions one can read in the monograph of Wilkinson [7, p. 570] and in Parlett and
Poole [11]. In practice, due to round-off errors in the iterative process, the condition
a_1 ≠ 0 will be satisfied after a few steps, even if the starting assumption on the vector
v_0 is not fulfilled.
Example 4.3.1.
Let
    A = [ −261   209    −49 ]
        [ −530   422    −98 ]
        [ −800   631   −144 ],
* If x is an eigenvector, then cx (c ≠ 0) is also an eigenvector corresponding to the
same eigenvalue.
Table 4.3.1
k     γ_k       (v_k)_1   (v_k)_2   (v_k)_3
1 144.0000 0.340278 0.680556 1.
2 13.2083 0.334911 0.669821 1.
3 10.7287 0.333774 0.667549 1.
4 10.2038 0.333463 0.666926 1.
5 10.0599 0.333372 0.666744 1.
6 10.0179 0.333345 0.666690 1.
7 10.0054 0.333337 0.666674 1.
8 10.0016 0.333334 0.666669 1.
9 10.0005 0.333334 0.666667 1.
10 10.0001 0.333333 0.666667 1.
11 10.0000 0.333333 0.666667 1.
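The iteration of Table 4.3.1 is easy to reproduce; below is a minimal sketch of the power method (4.3.3), assuming NumPy and an arbitrarily chosen starting vector v_0 (the source's v_0 differs, so the early iterates will not match the table, only the limits):

```python
import numpy as np

# Matrix of Example 4.3.1; its dominant eigenvalue is 10.
A = np.array([[-261.0, 209.0,  -49.0],
              [-530.0, 422.0,  -98.0],
              [-800.0, 631.0, -144.0]])

v = np.array([1.0, 1.0, 1.0])        # arbitrary starting vector v0
for _ in range(60):
    z = A @ v                        # z_k = A v_{k-1}
    gamma = z[np.argmax(np.abs(z))]  # coordinate of maximal modulus
    v = z / gamma                    # normed iterate v_k

# gamma tends to the dominant eigenvalue 10,
# v to the normed eigenvector (1/3, 2/3, 1).
print(gamma, v)
```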
(4.4.1)    v_0 = z − ((z, x_1)/(x_1, x_1)) x_1
Because of (v_0, x_1) = 0, from a theoretical point of view, the sequence v_k = A v_{k−1} (k =
1, 2, ...) in the power method could be used for determination of λ_2 and the corresponding
eigenvector x_2. Nevertheless, regardless of the fact that v_0 does not have a component
in the direction of the eigenvector x_1, the power method would, because of round-off
errors, after some number of iterations, converge toward the eigenvector x_1. This fact
was mentioned in the previous section.
It is possible to eliminate this influence of round-off errors using the so-called periodical
"purification" of vector v_0 from the component in the direction of x_1. That means that,
after, say, r steps, we compute v_0 anew using v_r in place of z in (4.4.1).
In this way, if the period of "purification" is small enough that no significant
accumulation of round-off error can happen, the eigenvalue λ_2 and the eigenvector x_2
can be determined by the power method.
By continuation of this procedure we can further determine λ_3 and x_3.
54 Numerical Methods in Computational Engineering
Generally, if we determine λ_1, ..., λ_ν and the corresponding vectors x_1, ..., x_ν (ν < n),
it is possible to determine λ_{ν+1} and x_{ν+1} using the power method, by forming a
vector v_0 orthogonal to x_1, ..., x_ν. So, starting from an arbitrary vector z, we have

(4.4.2)    v_0 = z − Σ_{i=1}^{ν} ((z, x_i)/(x_i, x_i)) x_i,

meaning that the vector v_0 has components only in the directions of the remaining
eigenvectors. The power method applied to v_0 gives x_{ν+1} and λ_{ν+1} in the absence
of round-off errors. This not being the case, frequent "purification" of the vector v_k
from components in the directions x_1, ..., x_ν is necessary. In other words, after r steps,
one should determine v_0 again using (4.4.2), with v_r in place of z.
Also, in the case when matrix A is not symmetric but has a complete system of
eigenvectors, the given orthogonalization procedure can be applied.
2. Inverse iteration method. This method is applied to a general matrix A and is
based on the solution of the system of equations

(4.4.3)    (A − pI) v_k = v_{k−1}   (k = 1, 2, ...),

where p is a constant and v_0 an arbitrary vector. System (4.4.3) is usually solved by
the Gauss elimination method or by the Cholesky method, by factorization of the matrix
B = A − pI. Note that the method of inverse iteration is equivalent to the power method
applied to B^{-1}. Therefore, by applying the method of inverse iteration the dominant
eigenvalue of the matrix B^{-1} is obtained, i.e. μ_ν = 1/(λ_ν − p), for which it holds
(4.4.4)
In the example that follows, the factorization

    LR = PB

is used, where P is a permutation matrix, L lower triangular with unit diagonal, and R
upper triangular (the numerical entries of B, L and R in this example are only partially
legible in the source).
Table 4.4.1
k    (v_k)_1    (v_k)_2    (v_k)_3    p_k
1 0. 1. -1. 6.
2 -0.2 1. -0.6 9.3
3 -0.17241 1. -0.48276 9.34483
4 -0.17200 1. -0.48000 9.34800
5 -0.17185 1. -0.47980 9.34835
6 -0.17184 1. -0.47977 9.34838
For the initial vector we took v_0 = [1 0 0]^T. In the last column of the table the
quantity p_k = p + 1/γ_k is given, which yields an approximation of the corresponding
eigenvalue λ. One can see that this eigenvalue has the approximate value 9.34838.
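A sketch of inverse iteration (4.4.3) follows; since the matrix of the example above is not fully legible, the matrix of Example 4.3.1 (eigenvalues 10, 4, 3) is reused here, with an illustrative shift p = 4.2 near the eigenvalue 4 (both choices are assumptions, not from the source):

```python
import numpy as np

A = np.array([[-261.0, 209.0,  -49.0],   # matrix of Example 4.3.1,
              [-530.0, 422.0,  -98.0],   # with eigenvalues 10, 4, 3
              [-800.0, 631.0, -144.0]])
p = 4.2                              # shift near the wanted eigenvalue
B = A - p * np.eye(3)

v = np.array([1.0, 0.0, 0.0])        # initial vector v0
for _ in range(50):
    z = np.linalg.solve(B, v)        # solve (A - pI) z = v_{k-1}
    gamma = z[np.argmax(np.abs(z))]  # coordinate of maximal modulus
    v = z / gamma

lam = p + 1.0 / gamma                # eigenvalue estimate, as p_k above
print(lam)                           # tends to the eigenvalue closest to p, here 4
```

Each step costs one triangular solve if B is factorized once up front, which is how the Gauss or Cholesky factorization mentioned above is exploited in practice.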
3. Deflation methods. The methods of this kind consist of the construction of a
sequence of matrices A_n (= A), A_{n−1}, ..., A_1, whose order is equal to the index, and
thereby

    Sp(A_n) ⊃ Sp(A_{n−1}) ⊃ · · · ⊃ Sp(A_1),

where Sp(A_k) denotes the spectrum of matrix A_k.
We will now describe a special and important case of the deflation method, when the
matrix A is Hermitian.
Let x = [x_1 x_2 ... x_n]^T be an eigenvector of matrix A corresponding to the eigenvalue
λ, normed so that

    (x, x) = x* x = ||x||_2^2 = 1,

with the first coordinate x_1 being nonnegative.
Consider the matrix

(4.4.6)    P = I − 2 w w*,

where the vector w = [w_1 w_2 ... w_n]^T is defined by means of the first vector
e_1 = [1 0 ... 0]^T of the natural basis of the space C^n in the following way:

(4.4.7)    w = (x − e_1) / ||x − e_1||_2.
The matrix P is of the form

    P = [ 1 − 2 w_1 w̄_1     −2 w_1 w̄_2    ...    −2 w_1 w̄_n ]
        [  −2 w_2 w̄_1     1 − 2 w_2 w̄_2   ...    −2 w_2 w̄_n ]
        [      ...                                    ...     ]
        [  −2 w_n w̄_1       −2 w_n w̄_2    ...  1 − 2 w_n w̄_n ]
Since P* = P and P* P = P^2 = I, the matrix P is unitary. The matrix B = P A P
has the form

    B = [ λ    b_12  b_13  ...  b_1n ]
        [ 0    b_22  b_23  ...  b_2n ]
        [ 0    b_32  b_33  ...  b_3n ]
        [ ...                        ]
        [ 0    b_n2  b_n3  ...  b_nn ],

where by A_{n−1} we denote the matrix of order n − 1 which matches the enclosed block
of elements b_ij (i, j = 2, ..., n).
In order to get the matrix A_{n−2} we proceed in a similar way. In place of the matrix P
we use the matrix

    P_1 = [ 1          0_{n−1}^T ]
          [ 0_{n−1}    Q         ],

where the matrix Q is of order n − 1 and of the form (4.4.6), satisfying the conditions
(4.4.6) and (4.4.7) regarding the eigenvector y and the eigenvalue μ of the matrix A_{n−1}.
Because of P_1 = P_1* = P_1^{−1} we conclude that the matrix P_1 is unitary, too.
Now the matrix C = P_1 B P_1 = P_1 P A P P_1 has the form

    C = [ λ    c_12   c_13  ...  c_1n ]
        [ 0    μ      c_23  ...  c_2n ]
        [ 0    0                      ]
        [ ...  ...       A_{n−2}      ]
        [ 0    0                      ],

where the matrix A_{n−2} is of order n − 2. By continuing this procedure we get an upper
triangular matrix which is unitarily similar to the initial matrix A. Having in mind that
the matrix A is Hermitian, we conclude that it is unitarily similar to a diagonal matrix.
The presented procedure demands, before every step, determination of one eigenvalue
and the corresponding eigenvector, which can be done by some of the previously presented
methods. Thus, before the first step one has to determine the eigenvalue λ and eigenvector
x of the matrix A, before the second step the eigenvalue μ and eigenvector y of the matrix
A_{n−1}, and so on.
It is clear that the eigenvalues of matrix A are the diagonal elements of the obtained
triangular matrix, i.e. λ_1 = λ, λ_2 = μ, etc. There remains the question: what about the
eigenvectors of matrix A? It is clear that x_1 = x. We will show how, based on the obtained
results, the second eigenvector of matrix A can be found.
If the coordinates of the eigenvector y are y_2, ..., y_n, in order to find, at first, the
eigenvector y′ of the matrix B, put y′ = [y_1 y_2 ... y_n]^T and try to determine y_1.
Because of

    B y′ = [ λ    b_{n−1}^T ] [ y_1 ]   [ λ y_1 + b_{n−1}^T y ]
           [ 0    A_{n−1}   ] [ y   ] = [ μ y                 ],

i.e. B y′ = μ y′, it follows

    λ y_1 + b_{n−1}^T y = μ y_1.

If λ ≠ μ, by virtue of the previous equality we get

    y_1 = (1/(μ − λ)) b_{n−1}^T y = (1/(μ − λ)) (b_12 y_2 + · · · + b_1n y_n).
Now one simply finds the eigenvector x_2 of matrix A corresponding to the eigenvalue
λ_2 = μ. Indeed, because of P A P y′ = μ y′, i.e. A (P y′) = μ (P y′), we conclude that
x_2 = P y′. In a similar way the other eigenvectors can be determined.
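One deflation step for a real symmetric matrix can be sketched as follows; the 3 × 3 matrix is arbitrary test data, and the eigenpair (λ, x) is taken from a library call rather than from the methods of the previous sections (w is built as (x − e_1)/||x − e_1||, the standard Householder choice):

```python
import numpy as np

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])          # arbitrary symmetric test matrix
vals, vecs = np.linalg.eigh(A)
lam, x = vals[-1], vecs[:, -1]           # one eigenpair, with ||x||_2 = 1
if x[0] < 0:
    x = -x                               # make the first coordinate nonnegative

e1 = np.array([1.0, 0.0, 0.0])
w = (x - e1) / np.linalg.norm(x - e1)    # unit vector defining P, cf. (4.4.7)
P = np.eye(3) - 2.0 * np.outer(w, w)     # (4.4.6): P = I - 2 w w^T, so P x = e1

B = P @ A @ P        # B has lambda at position (1,1), zeros below it,
print(B)             # and A_{n-1} as the trailing 2 x 2 block
```

Repeating the same construction on the trailing block `B[1:, 1:]` yields the next deflation step.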
Theorem 4.5.1. (Givens) Let all elements c_k ≠ 0 of a symmetric tridiagonal matrix A
of order n. Then it holds:
(1) The zeros of every polynomial p_k (k = 2, ..., n) are real, distinct, and separated by
the zeros of the polynomial p_{k−1};
(2) If p_n(λ) ≠ 0, the number of eigenvalues of matrix A less than λ is equal to the number
of sign changes s(λ) in the series (4.5.2).
If some p_k(λ) = 0, then at this place in the series (4.5.2) an arbitrary sign can be taken,
regarding that p_{k−1}(λ) p_{k+1}(λ) < 0.
Remark that in the theorem there is the condition c_k ≠ 0 for every k = 2, ..., n. If,
for example, c_m = 0 for some k = m, then the problem simplifies, because it splits into two
problems of lower order (m and n − m). Namely, the matrix A becomes

    A = [ A′   0  ]
        [ 0    A″ ],

where A′ and A″ are tridiagonal symmetric matrices of order m and n − m, respectively,
and in this case p_n(λ) is the product of the characteristic polynomials of A′ and A″.
Using multiple values for λ it is possible, by systematic application of Theorem 4.5.1,
to determine disjoint intervals in which the eigenvalues of matrix A lie. Thus, if we find
that s(λ_2) = s(λ_1) + 1, based on Theorem 4.5.1 we have that in the interval (λ_1, λ_2)
lies one and only one eigenvalue of matrix A. Then for its determination the simple
method of halving the interval (bisection method) can be used, by contraction of this
starting interval up to the desired exactness.
For determination of the intervals in which the eigenvalues lie, the theorem of
Gershgorin can also be used, so that those intervals are obtained from the Gershgorin
discs.
For the symmetric tridiagonal matrix with diagonal elements (1, 3, 5, 7) and off-diagonal
elements (1, 2, 3), using the recurrence p_0(λ) = 1, p_1(λ) = a_1 − λ,
p_k(λ) = (a_k − λ) p_{k−1}(λ) − c_k^2 p_{k−2}(λ), we have:
Let λ = 0. Then we have p_0(0) = 1, p_1(0) = 1, p_2(0) = 2, p_3(0) = 6, p_4(0) = 24.
Thus, in the series (4.5.2) the signs are +, +, +, +, +, which means that there is no sign
change, i.e. s(0) = 0. According to Theorem 4.5.1, matrix A does not have negative
eigenvalues, i.e. it is positive-definite.
Taking in sequence for λ the values 1, 2, 4, 5, 7, 9, 10, we get the results given in Table
4.5.1.
Table 4.5.1

λ     p_0(λ)  p_1(λ)  p_2(λ)  p_3(λ)  p_4(λ)  s(λ)
1     1       0       −1      −4      −15     1
2     1       −1      −2      −2      8       2
4     1       −3      2       14      24      2
5     1       −4      7       16      −31     3
7     1       −6      23      −22     −207    3
9     1       −8      47      −156    −111    3
10    1       −9      62      −274    264     4
Based on the values of s(λ) we conclude that one eigenvalue of matrix A is located in each
of the intervals (0, 1), (1, 2), (4, 5), (9, 10). These eigenvalues with six figures are
0.322548, 1.745761, 4.536620, 9.395071.
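All entries of Table 4.5.1 are consistent with the symmetric tridiagonal matrix having diagonal elements (1, 3, 5, 7) and off-diagonal elements (1, 2, 3); assuming that matrix, the sign count s(λ) and the bisection refinement of the interval (4, 5) can be sketched as:

```python
a = [1.0, 3.0, 5.0, 7.0]   # diagonal elements a_1..a_4 (assumed, see above)
c = [1.0, 2.0, 3.0]        # off-diagonal elements c_2..c_4

def sturm_count(lam):
    """s(lam): sign changes in p_0(lam), ..., p_n(lam), i.e. the
    number of eigenvalues smaller than lam (Theorem 4.5.1)."""
    seq = [1.0, a[0] - lam]                    # p_0, p_1
    for k in range(1, len(a)):                 # p_k = (a_k - lam) p_{k-1} - c_k^2 p_{k-2}
        seq.append((a[k] - lam) * seq[-1] - c[k - 1] ** 2 * seq[-2])
    changes, prev = 0, seq[0]
    for p in seq[1:]:
        if p == 0.0:          # arbitrary sign is allowed when p_k = 0
            p = -prev
        if p * prev < 0:
            changes += 1
        prev = p
    return changes

lo, hi = 4.0, 5.0              # bracket containing exactly one eigenvalue
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if sturm_count(mid) >= 3:  # s jumps from 2 to 3 at the root
        hi = mid
    else:
        lo = mid
print(0.5 * (lo + hi))         # the eigenvalue in (4, 5)
```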
(4.6.1)    A_k = L_k R_k,   A_{k+1} = R_k L_k   (k = 1, 2, ...; A_1 = A).
Note that the matrices A_{k+1} and A_k are similar, because they are connected by the
similarity transformation
(4.6.2)    A_{k+1} = L_k^{-1} A_k L_k,
wherefrom it follows

    L^{(k)} R^{(k)} = L^{(k−1)} A_k R^{(k−1)} = A L^{(k−1)} R^{(k−1)},

which means that L^{(k)} R^{(k)} is the factorization of the matrix A^k. Using these facts,
Rutishauser [14] (see also [7]) showed that under certain conditions the series of matrices
{A_k} converges towards some upper triangular matrix, whose elements on the main
diagonal give the eigenvalues of matrix A. Usually, the LR method is applied to matrices
previously reduced to upper Hessenberg form (a_ij = 0 for i ≥ j + 2). If, by means of
some method, a general matrix is reduced to lower Hessenberg form, we apply the LR
method to the transposed matrix, which has the same eigenvalues. All matrices in the
series {A_k} then have Hessenberg form.
Acceleration of the convergence of the series {A_k} can be achieved by a convenient
shift p_k, so that in place of A_k we factorize B_k = A_k − p_k I = L_k R_k, whereby
A_{k+1} = p_k I + R_k L_k.
Unfortunately, the LR algorithm has several disadvantages (see the monograph of
Wilkinson [7]). For example, the factorization does not exist for every matrix. A better
factorization method was developed by J.G.F. Francis [15] and V.N. Kublanovskaya [16],
where the matrix L is replaced with a unitary matrix Q. So one gets the QR algorithm,
defined by
(4.6.3)    A_k = Q_k R_k,   A_{k+1} = R_k Q_k   (k = 1, 2, ...; A_1 = A).
(4.6.5)
Theorem 4.6.1. If the matrix A is regular, then there exists a decomposition A = QR,
where Q is unitary and R an upper triangular matrix. Moreover, if the diagonal elements
of the matrix R are positive, the decomposition is unique.
The QR factorization (4.6.3) can be performed by using unitary matrices of the form
I − 2ww*. So, in order to transform A_k to R_k, i.e. to reduce the columns of A_k, we have
(4.6.6)
The matrix Q_k is then
(4.6.7)
The QR algorithm is efficient if the initial matrix has (upper) Hessenberg form. Then
the previously mentioned unitary matrices reduce to two-dimensional rotations. All
matrices A_k are of Hessenberg form. Thus, the eigenvalue problem for a general matrix
is most conveniently solved in two steps. At first, reduce the matrix to Hessenberg form,
and then apply the QR algorithm.
In the special case when the initial matrix is tridiagonal, the matrices A_k in the QR
algorithm are also tridiagonal. In that case, using a conveniently chosen shift p_k, the QR
algorithm becomes very efficient for solving the eigenvalue problem of tridiagonal
matrices.
Similar to the QR algorithm, a QL algorithm has been developed ([18]), where L is a
lower triangular matrix and Q a unitary matrix. Also, the so-called implicit QL algorithm
has been developed ([19]).
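A bare-bones sketch of the unshifted QR iteration (4.6.3), using a library QR factorization on the tridiagonal matrix consistent with Table 4.5.1 (an assumed test matrix):

```python
import numpy as np

# Symmetric tridiagonal test matrix: diagonal (1,3,5,7), off-diagonal (1,2,3).
A = (np.diag([1.0, 3.0, 5.0, 7.0])
     + np.diag([1.0, 2.0, 3.0], 1)
     + np.diag([1.0, 2.0, 3.0], -1))

Ak = A.copy()
for _ in range(200):
    Q, R = np.linalg.qr(Ak)   # A_k = Q_k R_k
    Ak = R @ Q                # A_{k+1} = R_k Q_k, similar to A_k

# The iterates stay tridiagonal and tend to a diagonal matrix whose
# entries are the eigenvalues of A.
print(np.sort(np.diag(Ak)))
```

In practice a shift p_k and an initial reduction to Hessenberg or tridiagonal form make this competitive, as described above; this sketch relies on brute iteration count instead.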
• The direct method is not a good method for solving linear eigenproblems. However,
it can be used for solving nonlinear eigenproblems.
• For serious eigenproblems, the QR method is recommended.
• Eigenvectors corresponding to a known eigenvalue can be determined by one ap-
plication of the shifted inverse power method.
Almost all software routines in use nowadays trace their ancestry back to routines
published in Wilkinson and Reinsch's book Handbook for Automatic Compu-
tation, Vol. II, Linear Algebra [13]. A public-domain implementation of the
Handbook routines in FORTRAN is the EISPACK set of programs [3]. The routines
presented in the majority of most frequently used software packages are translations of
either the Handbook or EISPACK routines, so understanding these will take you a long
way towards understanding those canonical packages.
IMSL [4] and NAG [5] each provide proprietary implementations in FORTRAN of what
are essentially the Handbook routines.
Many commercial software packages contain eigenproblem solvers. Some of the
more prominent packages are Matlab and Mathcad. More sophisticated packages, such
as Mathematica, Macsyma, and Maple, also contain eigenproblem solvers. The book
Numerical Recipes [2] contains subroutines and advice for solving eigenproblems.
A good "eigenpackage" will provide separate routines, or separate paths through
sequences of routines, for the following desired calculations
• all eigenvalues and no eigenvectors
• all eigenvalues and some corresponding eigenvectors
• all eigenvalues and all corresponding eigenvectors.
The purpose of these distinctions is to save compute time and storage; it is wasteful
to calculate eigenvectors that you don't need. Often one is interested only in the eigen-
vectors corresponding to the largest few eigenvalues, or the largest few in magnitude,
or a few that are negative. The method usually used to calculate "some" eigenvectors is
typically more efficient than calculating all eigenvectors if you desire fewer than about
a quarter of the eigenvectors.
A good eigenpackage also provides separate paths for each of the above calculations
for each of the following special forms of the matrix:
• real, symmetric, tridiagonal
• real, symmetric, banded (only a small number of sub- and super-diagonals are
nonzero)
• real, symmetric
• real, nonsymmetric
• complex, Hermitian
• complex, non-Hermitian.
Again, the purpose of these distinctions is to save time and storage by using the
least general routine that will serve in any particular application.
Good routines for the following paths are available:
• all eigenvalues and eigenvectors of a real, symmetric, tridiagonal matrix
• all eigenvalues and eigenvectors of a real, symmetric matrix
• all eigenvalues and eigenvectors of a complex, Hermitian matrix
• all eigenvalues and no eigenvectors of a real, nonsymmetric matrix.
(4.8.1)    A · x = λ B · x,
where A and B are both matrices. Most such problems, where B is nonsingular, can
be handled by the equivalent problem
(4.8.2)    (B^{-1} · A) · x = λ x.
where
(4.8.4)
The matrix C is symmetric and its eigenvalues are the same as those of the original
problem (4.8.1); its eigenvectors are L^T · x. The efficient way to form C is first to
solve the equation
(4.8.5)    Y · L^T = A
for Y, and then the equation
(4.8.6)    L · C = Y
for C.
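For symmetric A and positive-definite B the whole reduction can be sketched as follows (the 2 × 2 matrices are arbitrary test data; the Cholesky factor plays the role of L above):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric
B = np.array([[4.0, 1.0], [1.0, 2.0]])   # symmetric positive definite

L = np.linalg.cholesky(B)        # B = L L^T
Y = np.linalg.solve(L, A.T).T    # (4.8.5): Y L^T = A, solved for Y
C = np.linalg.solve(L, Y)        # (4.8.6): L C = Y, solved for C

# C = L^{-1} A L^{-T} is symmetric, and its eigenvalues
# solve the generalized problem A x = lambda B x.
print(np.linalg.eigvalsh(C))
```

Working with the symmetric C instead of the nonsymmetric B^{-1}·A preserves symmetry, so the cheaper and more stable symmetric eigensolvers apply.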
A quadratic eigenproblem (λ^2 A + λ B + C) · x = 0 can likewise be rewritten as a
linear eigenproblem of doubled order,

    [    0          I      ] [ x ]       [ x ]
    [ −A^{-1}·C  −A^{-1}·B ] [ y ]  = λ  [ y ],

where y = λ x.
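This linearization of a quadratic eigenproblem is easy to check numerically; in the sketch below (arbitrary small test matrices) every eigenpair of the block matrix is verified to solve the quadratic problem:

```python
import numpy as np

# Quadratic eigenproblem (lambda^2 A + lambda B + C) x = 0, test data.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 1.0], [0.0, 3.0]])
C = np.array([[4.0, 1.0], [1.0, 5.0]])

Ainv = np.linalg.inv(A)
M = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Ainv @ C,        -Ainv @ B]])   # block companion matrix

lams, V = np.linalg.eig(M)
for lam, col in zip(lams, V.T):
    x = col[:2]                                 # x-block of [x; y], y = lam x
    r = (lam**2 * A + lam * B + C) @ x
    assert np.linalg.norm(r) < 1e-8             # x solves the quadratic problem
print(lams)
```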
[1] Milovanović, G.V., Numerical Analysis I, Naučna knjiga, Beograd, 1988 (Serbian).
[2] Press, W.H., Flannery, B.P., Teukolsky, S.A., and Vetterling, W.T., Numerical
Recipes - The Art of Scientific Computing, Cambridge University Press, 1989.
[3] Smith, B.T., et al., Matrix Eigensystem Routines - EISPACK Guide,
2nd ed., vol. 6 of Lecture Notes in Computer Science, Springer, New York, 1976.
[4] IMSL Math/Library Users Manual, IMSL Inc., 2500 CityWest Boulevard,
Houston TX 77042.
[5] NAG Fortran Library, Numerical Algorithms Group, 256 Banbury Road,
Oxford OX2 7DE, U.K., Chapter F02.
[6] Golub, G.H., and Van Loan, C.F., Matrix Computations, Johns Hopkins
University Press, Baltimore, 1989.
[7] Wilkinson, J.H., The Algebraic Eigenvalue Problem, Oxford University Press,
New York, 1965.
[8] Acton, F.S., Numerical Methods that Work, corrected edition, Mathematical
Association of America, Washington, 1970.
[9] Horn, R.A., and Johnson, C.R., Matrix Analysis, Cambridge University Press,
Cambridge, 1985.
[10] Milovanović, G.V. and Djordjević, Dj.R., Programiranje numeričkih metoda
na FORTRAN jeziku, Institut za dokumentaciju zaštite na radu "Edvard
Kardelj", Niš, 1981 (Serbian).
[11] Parlett, B.N. and Poole, W.G., A geometric theory for the QR, LU, and
power iterations, SIAM J. Numer. Anal. 10(1973), 389-412.
[12] Barth, W., Martin, R.S., Wilkinson, J.H., Calculation of the eigenvalues of
a symmetric tridiagonal matrix by the bisection method, Numer. Math.
9(1967), 386-393.
[13] Wilkinson, J.H. and Reinsch, C., Handbook for Automatic Computation, Vol.
II: Linear Algebra, Springer-Verlag, Berlin-Heidelberg-New York, 1971.
[14] Rutishauser, H., Solution of eigenvalue problems with the LR-transfor-
mation, Appl. Math. Ser. Nat. Bur. Stand. 49(1958), 47-81.
[15] Francis, J.G.F., The QR transformation - a unitary analogue to the LR
transformation, Comput. J. 4(1961/62), 265-271, 332-345.
[16] Kublanovskaya, V.N., O nekotoryh algorifmah dlja resenija polnoi prob-
lemy sobstvennyh znacenii, Ž. Vyčisl. Mat. i Mat. Fiz. 1(1961), 555-570.
[17] Golub, G.H. and Welsch, J.H., Calculation of Gauss quadrature rules, Math.
Comp. 23(1969), 221-230.
[18] Bowdler, H., Martin, R.S., Reinsch, C., Wilkinson, J.H., The QR and QL algo-
rithms for symmetric matrices, Numer. Math. 11(1968), 293-306.
[19] Dubrulle, A., Martin, R.S., Wilkinson, J.H., The implicit QL algorithm, Nu-
mer. Math. 12(1968), 377-383.
[20] Stoer, J., and Bulirsch, R., Introduction to Numerical Analysis, Springer,
New York, 1980.
[21] Mitrinović, D.S. and Djoković, D.Ž., Polinomi i matrice, ICS, Beograd, 1975
(Serbian).
[22] Hoffman, J.D., Numerical Methods for Engineers and Scientists, Taylor
& Francis, Boca Raton-London-New York-Singapore, 2001.
Faculty of Civil Engineering, Belgrade - Master Study
Faculty of Civil Engineering and Architecture, Niš - Doctoral Study

COMPUTATIONAL ENGINEERING
LECTURES

LESSON V
Nonlinear Equations and Systems of Equations
5.1.0. Introduction
We consider that most basic of tasks, solving equations numerically. While most
equations are born with both a right-hand side and a left-hand side, one traditionally
moves all terms to the left, leaving
(5.1.0.1)    f(x) = 0
whose solution or solutions are desired. When there is only one independent variable,
the problem is one-dimensional, namely to find the root or roots of a function. Figure
5.1.0.1 illustrates the problem graphically.
Figure 5.1.0.1
With more than one independent variable, more than one equation can be satisfied
simultaneously. You likely once learned the implicit function theorem which (in this
context) gives us the hope of satisfying n equations in n unknowns simultaneously.
Note that we have only hope, not certainty. A nonlinear set of equations may have no
(real) solutions at all. Contrariwise, it may have more than one solution. The implicit
function theorem tells us that generically the solutions will be distinct, pointlike, and
separated from each other. But in nongeneric, i.e., degenerate, cases one can
get a continuous family of solutions. In vector notation, we want to find one or more
n-dimensional solution vectors x such that

(5.1.0.2)    f(x) = 0,

where f is the n-dimensional vector-valued function whose components are the individ-
ual equations to be satisfied simultaneously. Simultaneous solution of equations in n
dimensions is much more difficult than finding roots in the one-dimensional case. The
principal difference between one and many dimensions is that, in one dimension, it is
possible to bracket or "trap" a root between bracketing values, and then find it di-
rectly. In multidimensions, you can never be sure that the root is there at all until you
have found it. Except in linear problems, root finding invariably proceeds by iteration,
and this is equally true in one or in many dimensions. Starting from some approximate
trial solution, a useful algorithm will improve the solution until some predetermined
convergence criterion is satisfied. For smoothly varying functions, good algorithms will
always converge, provided that the initial guess is good enough. Indeed one can even
determine in advance the rate of convergence of most algorithms. It cannot be overem-
phasized, however, how crucially success depends on having a good first guess for the
solution, especially for multidimensional problems. This crucial beginning usually de-
pends on analysis rather than numerics. Carefully crafted initial estimates reward you
not only with reduced computational effort, but also with understanding and increased
self-esteem. Hamming's motto, "the purpose of computing is insight, not numbers,"
is particularly apt in the area of finding roots. One should repeat this motto aloud
whenever a program converges, with ten-digit accuracy, to the wrong root of a problem,
or whenever it fails to converge because there is actually no root, or because there is
a root but the initial estimate was not sufficiently close to it.

For one-dimensional root finding, it is possible to give some straightforward answers.
You should try to get some idea of what your function looks like before trying to find
its roots. If you need to mass-produce roots for many different functions, then you
should at least know what some typical members of the ensemble look like. Next, you
should always bracket a root, that is, know that the function changes sign in an
identified interval, before trying to converge to the root's value. Finally, one should
never let an iteration method get outside of the best bracketing bounds obtained at any
stage. We can see that some pedagogically important algorithms, such as the secant
method or Newton-Raphson, can violate this last constraint, and are thus not
recommended unless certain fixups are implemented.

Multiple roots, or very close roots, are a real problem, especially if the multiplicity is
an even number. In that case, there may be no readily apparent sign change in the
function, so the notion of bracketing a root and maintaining the bracket becomes diffi-
cult. We nevertheless insist on bracketing a root, even if it takes minimum-searching
techniques to determine whether a tantalizing dip in the function really does cross zero
or not. As usual, we want to discourage the reader from using routines as black boxes
without understanding them.
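The "always keep a bracket" advice is exactly what the bisection method (treated in Section 5.1.2) embodies; a minimal sketch:

```python
import math

def bisect(f, a, b, tol=1e-10):
    """Root of f in [a, b]; f(a) and f(b) must differ in sign."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "the root must be bracketed first"
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:       # the root stays in [a, m]
            b, fb = m, fm
        else:                  # the root stays in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

root = bisect(lambda x: x - math.cos(x), 0.0, math.pi / 2)
print(round(root, 6))   # -> 0.739085
```

The bracket shrinks by a factor of two per step, so the error bound is known in advance, at the price of only linear convergence.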
Figure 5.1.0.2
Nonlinear equations can behave in various ways in the vicinity of a root. Algebraic
and transcendental equations may have distinct (i.e. simple) real roots, repeated (i.e.
multiple) real roots, or complex roots. Polynomials may have real or complex roots. If
the polynomial coefficients are all real, complex roots occur in conjugate pairs. If
the polynomial coefficients are complex, single complex roots can occur. Figure 5.1.0.2
illustrates several distinct types of behavior of nonlinear equations in the vicinity of a
root. Panel (a) illustrates the case of a single real root, called a simple root. Panel (b)
illustrates a case where no real roots exist; complex roots may exist in such a case. Two
and three simple roots are shown in (c) and (d), respectively. Two and three multiple
roots are illustrated in (e) and (f), respectively. A case with one simple root and two
multiple roots is given in (g), and (h) illustrates the general case with any number of
simple and multiple roots.
There are two distinct phases in finding the roots of nonlinear equation (see [2],
pp. 130-135):
(1) Bounding the solution, and
(2) Refining the solution.
In general, nonlinear equations can behave in many different ways in the vicinity
of a root. ·
(1) Bounding the solution
Bounding the solution involves finding a rough estimate of the solution that can
be used as the initial approximation, or the starting point, in a systematic procedure
that refines the solution to a specified tolerance in an efficient manner. If possible, it
is desirable to bracket the root between two points at which the value of the nonlinear
function has opposite signs. The bounding procedures can be:
1. Drafting the function,
2. Incremental search,
3. Previous experience or similar problem,
4. Solution of a simplified approximate model.
Drafting the function involves plotting the nonlinear function over the range of
interest. Spreadsheets generally have graphing capabilities, as do Mathematica,
Matlab and Mathcad. The resolution of the plots is generally not precise enough for
accurate results. However, they are accurate enough to bound the solution. The plot
of the nonlinear function displays the behavior of the nonlinear equation and gives a
view of the scope of the problem.
An incremental search is conducted by starting at one end of the region of interest
and evaluating the nonlinear function with small increments across the region. When
the value of the function changes sign, it is assumed that a root lies in that interval.
The two end points of the interval containing the root can be used as initial guesses for
a refining method (the second phase of the solution). If multiple roots are suspected,
one has to check for sign changes in the derivative of the function between the ends of
the interval.
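An incremental search can be sketched as follows; the test equation x − cos x = 0 on [0, π/2] and the step size are illustrative choices, not from the source:

```python
import math

def incremental_search(f, a, b, dx):
    """Scan [a, b] with step dx; collect subintervals where f changes sign."""
    brackets = []
    x, fx = a, f(a)
    while x < b:
        x2 = min(x + dx, b)
        fx2 = f(x2)
        if fx * fx2 < 0:       # sign change: a root lies in (x, x2)
            brackets.append((x, x2))
        x, fx = x2, fx2
    return brackets

brackets = incremental_search(lambda x: x - math.cos(x), 0.0, math.pi / 2, 0.1)
print(brackets)        # one bracket, around x = 0.739
```

Each returned pair of end points can then be handed to a refining method such as bisection or Newton's method.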
They use information about the nonlinear function itself to come closer to an estimate
of the root. Thus, they are much more efficient than bracketing methods.
(5.1.1.1)
where ξ = x_0 + θ(a − x_0) (0 < θ < 1). Having in mind that f(a) = 0, by neglecting the
last member on the right-hand side of (5.1.1.1), we get

    a ≈ x_0 − f(x_0)/f′(x_0).
Here x_1 represents the abscissa of the intersection of the tangent to the curve y = f(x)
at the point (x_0, f(x_0)) with the x-axis (see Figure 5.1.1.1).

Figure 5.1.1.1
(5.1.1.3)    x_{k+1} = x_k − f(x_k)/f′(x_k)   (k = 0, 1, ...),

i.e. x_{k+1} = φ(x_k) with the iteration function φ(x) = x − f(x)/f′(x); by
differentiation we get

(5.1.1.4)    φ′(x) = 1 − (f′(x)^2 − f(x) f″(x))/f′(x)^2 = f(x) f″(x)/f′(x)^2.
Note that φ(a) = a and φ′(a) = 0. Since, based on the accepted assumptions on f, the
function φ′ is continuous on [α, β] and φ′(a) = 0, there exists a neighborhood U(a) of
the point x = a where it holds

(5.1.1.5)    |φ′(x)| = |f(x) f″(x)|/f′(x)^2 ≤ q < 1.

Theorem 5.1.1.1. If x_0 ∈ U(a), the sequence {x_k} generated using (5.1.1.3) converges
to the point x = a, whereby

(5.1.1.6)    lim_{k→+∞} (x_{k+1} − a)/(x_k − a)^2 = f″(a)/(2 f′(a))

(see [1], pp. 340-341).
Example 5.1.1.1.
Find the solution of the equation

f(x) = x - cos x = 0    (x ∈ [0, π/2]).

Note that f'(x) = 1 + sin x > 0 (∀x ∈ [0, π/2]). Starting with x_0 = 1, we get the results given in Table 5.1.1.
Table 5.1.1
k    x_k
0    1.000000
1    0.750364
2    0.739133
3    0.739085
4    0.739085
The last two iterations give the solution of the equation under consideration to six exact figures.
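The worked programs in this script are in FORTRAN; nevertheless, the iteration of Example 5.1.1.1 can be sketched compactly in Python (the function names below are illustrative, not from the text):

```python
import math

def newton(f, df, x0, tol=1e-6, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# the equation of Example 5.1.1.1: x - cos x = 0, with f'(x) = 1 + sin x
root = newton(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x), 1.0)
```

Starting from x_0 = 1, this reproduces the iterates of Table 5.1.1 and converges to 0.739085.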
Example 5.1.1.2.
By applying Newton's method to the equation f(x) = x^n - a = 0 (a > 0, n ≥ 1), we obtain the iterative formula for determination of the n-th root of a positive number a:

x_{k+1} = (1/n)((n - 1)x_k + a/x_k^{n-1})    (k = 0, 1, ...).
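A minimal Python sketch of this n-th root iteration (an illustration under the assumption of a positive starting value; not from the text):

```python
def nth_root(a, n, x0=1.0, tol=1e-12, max_iter=100):
    """Newton iteration for f(x) = x^n - a:
    x_{k+1} = ((n - 1)*x_k + a/x_k**(n-1)) / n."""
    x = x0
    for _ in range(max_iter):
        x_new = ((n - 1) * x + a / x ** (n - 1)) / n
        if abs(x_new - x) <= tol * abs(x_new):
            return x_new
        x = x_new
    return x
```

For n = 2 this is the classical Heron (Babylonian) square-root iteration.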
The case f''(a) = 0 is specially to be analyzed. Namely, if we suppose that f ∈ C^3[α, β], one can prove that

lim_{k→+∞} (x_{k+1} - a)/(x_k - a)^3 = f'''(a)/(3f'(a)).
Example 5.1.1.3.
Consider the equation

f(x) = x^3 - 3x^2 + 4x - 2 = 0.
Because of f(0) = -2 and f(1.5) = 0.625 we conclude that on the segment [0, 1.5] this equation has a root. On the other hand, f'(x) = 3x^2 - 6x + 4 = 3(x - 1)^2 + 1 > 0, which means that the root is simple, enabling application of Newton's method. Starting with x_0 = 1.5, we get the results in Table 5.1.2.
Table 5.1.2
k    x_k
0    1.5000000
1    1.1428571
2    1.0054944
3    1.0000003
The exact value of the root is a = 1, because f(x) = (x - 1)^3 + (x - 1).
In order to reduce the number of calculations, the following modification of Newton's method is often used:

x_{k+1} = x_k - f(x_k)/f'(x_0)    (k = 0, 1, ...).

Geometrically, x_{k+1} represents the abscissa of the intersection of the x-axis and the straight line passing through the point (x_k, f(x_k)) parallel to the tangent to the curve y = f(x) at the point (x_0, f(x_0)) (see Figure 5.1.1.2).
The iterative function of this modified Newton's method is

φ_1(x) = x - f(x)/f'(x_0).

Because of φ_1(a) = a and φ_1'(a) = 1 - f'(a)/f'(x_0), we conclude that the method has order of convergence one.
If in Newton's method the derivative f'(x_k) is replaced by the divided difference (f(x_k) - f(x_{k-1}))/(x_k - x_{k-1}), one gets the secant method

(5.1.1.7)    x_{k+1} = x_k - f(x_k)(x_k - x_{k-1})/(f(x_k) - f(x_{k-1}))    (k = 1, 2, ...),

which belongs to the open domain methods (a two-step method). For starting the iterative process (5.1.1.7), two initial values x_0 and x_1 are needed. The geometrical interpretation of the secant method is given in Figure 5.1.3.1.
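A short Python sketch of the secant iteration, applied to the cubic of Example 5.1.1.3 (illustrative names, not from the text):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant method (5.1.1.7): Newton's derivative replaced by the
    divided difference over the last two iterates (two initial values)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: x**3 - 3*x**2 + 4*x - 2, 0.0, 1.5)
```

Started from x_0 = 0, x_1 = 1.5, the iterates approach the exact root a = 1.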
(5.1.1.8)    x_{k+1} = x_k - f(x_k)(x_k - x_0)/(f(x_k) - f(x_0))    (k = 1, 2, ...).
This method is often called regula falsi or the false position method. Differently from the secant method, where it is enough to take x_1 ≠ x_0, this method requires x_1 and x_0 to lie on different sides of the root x = a. The geometric interpretation of the false position method is given in Figure 5.1.3.2.
Consider now the equation

(5.1.2.1)    f(x) = 0,

where f ∈ C[α, β]. The method of interval bisection for solution of equation (5.1.2.1) consists in the construction of a series of intervals {(x_k, y_k)}_{k∈N} such that

y_{k+1} - x_{k+1} = (1/2)(y_k - x_k)    (k = 1, 2, ...),

having thereby lim_{k→+∞} x_k = lim_{k→+∞} y_k = a. The described process of construction of intervals is interrupted when, for example, the interval length becomes smaller than a small positive number ε given in advance. This method can be described in four steps:
I.   k := 0, x_1 = α, y_1 = β;
II.  k := k + 1, z_k := (1/2)(x_k + y_k);
III. If f(z_k)f(x_k) < 0, take x_{k+1} := x_k, y_{k+1} := z_k;
     if f(z_k)f(x_k) > 0, take x_{k+1} := z_k, y_{k+1} := y_k;
     if f(z_k) = 0, take a := z_k; end of calculation;
IV.  If y_{k+1} - x_{k+1} ≥ ε, go to II; otherwise take
     z_{k+1} := (1/2)(x_{k+1} + y_{k+1}), a ≅ z_{k+1};
     end of calculation.
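The four steps can be sketched in Python as follows (an illustration; the script's own bisection program is the FORTRAN listing further below):

```python
import math

def bisection(f, a, b, eps=1e-10):
    """Interval bisection following steps I-IV above; requires
    f(a) f(b) < 0 at the start."""
    fa = f(a)
    while b - a >= eps:
        z = 0.5 * (a + b)
        fz = f(z)
        if fz == 0.0:
            return z
        if fa * fz < 0.0:
            b = z                 # root in (a, z)
        else:
            a, fa = z, fz         # root in (z, b)
    return 0.5 * (a + b)

root = bisection(lambda x: x - math.cos(x), 0.0, math.pi / 2)
```

Each pass halves the bracketing interval, so the error decreases by a guaranteed factor of 2 per function evaluation.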
Write a program for solving the equation

f(x) = 1 - x^(-a) + (1 - x)^(-a) = 0

using Newton's method, with accuracy ε = 10^(-5). For the initial approximation take x_0 = 0.5. On output, print the value of parameter a, the root x, and the corresponding value of f(x). The criterion for interrupting the iterative process is the accuracy ε. Namely, we consider the root to be found with accuracy ε if f(x) changes the sign in the interval (x_n - ε, x_n + ε). The program and output list are of the form:
C========================================================
C       SOLVING EQUATION
C       1 - X**(-A) + (1-X)**(-A) = 0
C       BY NEWTON'S METHOD
C========================================================
      FUNK(X,A)= 1 - X**(-A) + (1-X)**(-A)
      PRIZ(X,A)= A*X**(-A-1) + A*(1-X)**(-A-1)
      OPEN(6,File='NEWT1.OUT')
      WRITE(6,10)
10    FORMAT(10X,'A', 10X, 'X', 12X, 'F(X)'/)
      EPS=1.E-5
      DO 11 I=5,28
      A=I*0.1
      X0=0.5
6     X=X0-FUNK(X0,A)/PRIZ(X0,A)
      IF(FUNK(X+EPS,A)*FUNK(X-EPS,A).LT.0.) GO TO 7
      X0=X
      GO TO 6
7     Y=FUNK(X,A)
      WRITE(6,20)A,X,Y
20    FORMAT(9X, F3.1, 5X, F9.6, 5X, F9.6)
11    CONTINUE
      STOP
      END
and the output list of results is
A X F(X)
.5 .219949 -.000014
.6 .267609 .000000
.7 .305916 -.000026
.8 .336722 -.000003
.9 .361641 .000000
1.0 .381966 .000000
1.1 .398689 .000000
1.2 .412563 .000000
1.3  .424159  -.000084
K=K+1
FZ=F(Z)
7 WRITE(6,20) K,X,Y,FZ
6 WRITE(6,30) Z,EPS
30    FORMAT(/5X, 'A = ', D20.13,' (WITH EXACTNESS EPS = ',
     1 D7.1, ')')
STOP
END
and the output list of results is
K ( X(K) Y(K) ) F(Z(K))
 0 ( -.5000000000000D+00,  .1000000000000D+01)  .15903D+00
 5 (  .2031250000000D+00,  .2500000000000D+00)  .57870D-01
10 (  .2119140625000D+00,  .2133789062500D+00) -.29038D-02
15 (  .2132873535156D+00,  .2133331298828D+00)  .70475D-05
20 (  .2133073806763D+00,  .2133088111877D+00) -.23607D-05
25 (  .2133086323738D+00,  .2133086770773D+00)  .89352D-07
30 (  .2133086337708D+00,  .2133086351678D+00)  .53733D-09
35 (  .2133086343383D+00,  .2133086343820D+00)  .58801D-10
40 (  .2133086343465D+00,  .2133086343479D+00)  .19761D-11
41 (  .2133086343465D+00,  .2133086343472D+00)  .48077D-12
A =  .2133086343468D+00 (WITH EXACTNESS EPS = .1D-11)
Example 5.1.3.3.
Write a program for solving the nonlinear equation f(x) = 0 by the regula falsi method (5.1.1.8), computing x_i for i = 2, 3, .... The iterative process is interrupted when the condition f(x_i - ε)f(x_i + ε) ≤ 0 is fulfilled. For program testing use the following example:
END
c
c
      SUBROUTINE NEWT(A,B,N,Z1)
      DIMENSION A(1), B(1)
C  EVALUATION OF COEFFICIENTS OF P'(Z)
      DO 5 I=1,3
5     B(I)=A(I)*(4-I)
C  EVALUATION OF REAL ROOT Z(1)
      Z0=0.
10    Z1=Z0-PL(Z0,A,N)/PL(Z0,B,N-1)
      IF(ABS(Z1-Z0)-1.E-7) 20,15,15
15    Z0=Z1
      GO TO 10
C  EVALUATION OF COEFFICIENTS OF Q(Z)
20    B(1)=A(1)
      DO 25 I=2,3
25    B(I)=A(I)+B(I-1)*Z1
      RETURN
      END
For solving the quadratic equation Q(z) = az^2 + bz + c = 0 the subprogram KJ is formed. The arguments in the parameter list of the subprogram have the following meaning:
A, B, C - coefficients of the equation;
X1, Y1 - real and imaginary part of the first root of the equation;
X2, Y2 - real and imaginary part of the second root of the equation.
For algorithm steps 1° and 2° the subroutine NEWT is written, with following argu-
ments:
A - coefficients of polynomial P;
B - coefficients of polynomial P' and Q;
N - degree of polynomial P (N = 3);
Z1 - real root of equation P(z) = 0 obtained by Newton's method.
Main program and output list of results are of following form:
C========================================================
C SOLVING NONLINEAR EQUATION
C OF DEGREE THREE
C========================================================
DIMENSION A(4), B(3), ZR(3), ZI(3)
OPEN(6,File='POL.OUT')
OPEN(8,File='POL.IN')
5 READ(8,10,END=99)(A(I),I=1,4)
10 FORMAT(4F10.0)
IF(A(1)) 15,99,15
15 CALL NEWT(A,B,3,Z1)
ZR(1) =Z1
ZI(1)=0.
WRITE(6,20) (I,A(I), I=1,4)
20    FORMAT(/22X,'COEFFICIENTS OF POLYNOMIAL P(Z)'//5X,
     *4('A(',I1,')=',F8.5,3X)//)
      WRITE(6,25) (I,B(I),I=1,3)
25    FORMAT(/23X,'COEFFICIENTS OF POLYNOMIAL Q(Z)'//5X,
     *3('B(',I1,')=',F8.5,3X)//)
      WRITE(6,30)
30    FORMAT(/23X,' ZEROS OF POLYNOMIAL P(Z)'//27X,
     *'REAL',8X,'IMAG'/)
CALL KJ(B(1),B(2),B(3),ZR(2),ZI(2),ZR(3),ZI(3))
WRITE(6,35) (I,ZR(I) ,ZI(I) ,I=1,3)
35 FORMAT(/18X,'Z(',I1, ')=', 2F12.7)
GO TO 5
99 STOP
END
COEFFICIENTS OF POLYNOMIAL P(Z)
A(1)= 3.00000 A(2)=-7.00000 A(3)= 8.00000 A(4)=-2.00000
COEFFICIENTS OF POLYNOMIAL Q(Z)
B(1)= 3.00000 B(2)=-6.00000 B(3)= 6.00000
ZEROS OF POLYNOMIAL P(Z)
REAL IMAG
Z(1)= .3333333 .0000000
Z(2)= 1.0000000 1.0000000
Z(3)= 1.0000000 -1.0000000
COEFFICIENTS OF POLYNOMIAL P(Z)
A(1)= 1.00000 A(2)=-5.00000 A(3)=-1.00000 A(4)= 5.00000
COEFFICIENTS OF POLYNOMIAL Q(Z)
B(1)= 1.00000 B(2)= .00000 B(3)=-1.00000
ZEROS OF POLYNOMIAL P(Z)
REAL IMAG
Z(1)= 5.0000000 .0000000
Z(2)= 1.0000000 .0000000
Z(3)= -1.0000000 .0000000
Example 5.1.3.5.
Write a program for evaluation of the coefficients of a polynomial of the form

P(z) = C_1 + C_2 z + ... + C_{n+1} z^n,

if all its zeros z_k = x_k + i y_k (k = 1, ..., n) are known.
Let

P_k(z) = (z - z_1)(z - z_2)...(z - z_k),

and let C_i^{(k)} denote the coefficient of z^{i-1} in P_k(z). Then for polynomial P(z) it holds P(z) = P_n(z), i.e. C_i = C_i^{(n)} (i = 1, ..., n + 1). Because of

P_k(z) = (z - z_k)P_{k-1}(z),

the following recurrence relations hold:

C_1^{(k)} = -z_k C_1^{(k-1)},
C_i^{(k)} = C_{i-1}^{(k-1)} - z_k C_i^{(k-1)}    (i = 2, ..., k),
C_{k+1}^{(k)} = C_k^{(k-1)}.
Example 5.1.3.6.
Write a program for evaluation of a complex root of the transcendental equation f(z) = 0 using Newton's method

z_{n+1} = z_n - f(z_n)/f'(z_n)    (f'(z_n) ≠ 0).
XS=XO
YS=YO
CALL TRANS(XO,YO,A,B,R)
IF(R) 25,25,35
25 WRITE (6,30)
30 FORMAT(//5X,'FIRST DERIVATIVE OF FUNCTION= 0')
GO TO 50
35 IF(ABS(XO-XS)-EPS) 40,40,15
40 IF(ABS(YS-YO)-EPS) 45,45,15
45 KBR=2
GO TO 15
50 WRITE(6,55)EPS
55 FORMAT(/5X,'SPECIFIED ACCURACY OF CALCULATION '
*'EPSYLON = ',E7.1)
STOP
END
c
c
FUNCTION EF(X,Y,I)
      GO TO(10,20,30,40),I
10 EF=EXP(X)*COS(Y)-0.2*X+1
RETURN
20 EF=EXP(X)*SIN(Y)-0.2*Y
RETURN
30 EF=EXP(X)*COS(Y)-0.2
RETURN
40 EF=EXP(X)*SIN(Y)
RETURN
END
c
c
SUBROUTINE TRANS(XO,YO,A,B,R)
C=EF(XO,Y0,3)
D=EF(XO,Y0,4)
R=C*C+D*D
IF(R) 5,10,5
5 XO=XO-(A*C-B*D)/R
YO=YO-(B*C-A*D)/R
10 RETURN
END
NEWTON'S METHOD FOR SOLVING TRANSCENDENT EQUATION
F(Z)=EXP(Z) - 0.2*Z + 1 = 0
ITER.No. REAL(Z) IMAG(Z) REAL(F(Z)) IMAG(F(Z))
0 1.0000000 3.1415920 -1.918282 -.628317
1 .3426673 2.9262880 -.444708 -.284296
2    .0372190    2.7002840    .054076   -.096737
3 .0497327 2.6425620 .067235 -.025535
4 .0911207 2.6459620 .018186 -.008234
5 .1015006 2.6458960 .006090 -.002721
6 .1049549 2.6459040 .002026 -.000909
7    .1060995    2.6459040    .000678   -.000304
8 .1064820 2.6459040 .000227 -.000102
9 .1066103 2.6459040 .000076 -.000034
10 .1066533 2.6459040 .000026 -.000011
11 .1066677 2.6459040 .000009 -.000004
12 .1066726 2.6459040 .000003 -.000001
13 .1066742 2.6459040 .000001 .000000
14    .1066747    2.6459040    .000000    .000000
C========================================================
C EVALUATION OF COMPLEX ROOT OF TRANSCENDENT
C EQUATION F(Z)=O BY NEWTON 1 S METHOD
C USING COMPLEX ARITHMETIC
C========================================================
COMPLEX Z,ZO,F,Y,A
OPEN(6,File='NEWT-TRC.OUT')
OPEN(8,File='NEWT-TRC.IN')
READ(8,10) ZO
10 FORMAT(2E14.7)
      EPS=1.E-6
WRITE(6,20)
20    FORMAT(//10X, 'NEWTON''S METHOD FOR SOLVING TRANSCEN',
     *'DENT EQUATION'//18X,'F(Z)=EXP(Z) - 0.2*Z + 1 = 0'
     *//5X,'ITER.No.',4X,'REAL(Z)',5X,'IMAG(Z)',4X,
     *'REAL(F(Z))',2X,'IMAG(F(Z))'/)
ITER=O
Y=F(Z0,1)
13 WRITE(6,30)ITER,ZO,Y
30 FORMAT(5X,I4,2X,2F13.7,2F12.6)
Y=F(Z0,2)
B=CABS(Y)
IF(B.EQ.O.) GO TO 99
CALL NEW(Z,ZO)
ITER=ITER+1
Y=F(Z,1)
A=Z-ZO
IF(ABS(REAL(A)) .GT.EPS) GO TO 95
IF(ABS(AIMAG(A)).LE.EPS) GO TO 98
95 ZO=Z
GO TO 13
99 WRITE(6,40)
40 FORMAT(//5X,'FIRST DERIVATIVE OF FUNCTION= 0')
GO TO 97
98 WRITE(6,30) ITER,Z,Y
97 WRITE(6,55) EPS
55 FORMAT(/5X, 'SPECIFIED ACCURACY OF CALCULATION '
*'EPSYLON = ',E7.1)
STOP
END
c
c
COMPLEX FUNCTION F(Z,I)
COMPLEX Z
      GO TO(1,2),I
1 F=CEXP(Z) - 0.2*Z + 1
RETURN
2 F=CEXP(Z) - 0.2
RETURN
END
c
c
SUBROUTINE NEW(Z,ZO)
COMPLEX Z,ZO,F
      Z=Z0 - F(Z0,1)/F(Z0,2)
RETURN
END
NEWTON'S METHOD FOR SOLVING TRANSCENDENT EQUATION
F(Z)=EXP(Z) - 0.2*Z + 1 = 0
ITER. No. REAL(Z) IMAG(Z) REAL(F(Z)) IMAG(F(Z))
0 1.0000000 3.1415920 -1.918282 -.628317
1 .3426675 2.9262880 -.444709 -.284296
2 .1036775 2.7002840 -.023705 -.066273
3 .1054019 2.6458710 .001517 -.000634
4 .1066756 2.6459040 -.000001 .000000
5 .1066750 2.6459040 .000000 .000000
SPECIFIED ACCURACY OF CALCULATION EPSYLON = .1E-05
By taking x = [x_1 ... x_n]^T, θ = [0 ... 0]^T, where θ is the null-vector, we can write the system as f(x) = θ. Expanding each f_i in Taylor's formula around the approximation x^(k) gives

(5.2.1.2)    f_i(a_1, ..., a_n) = f_i(x_1^(k), ..., x_n^(k)) + Σ_{j=1}^{n} (∂f_i/∂x_j)(a_j - x_j^(k)) + r_i^(k)    (i = 1, ..., n),

where the partial derivatives on the right-hand side of the given equations are calculated at the point x^(k), and r_i^(k) represents the corresponding remainder term in Taylor's formula.
Because of f_i(a_1, ..., a_n) = 0 (i = 1, ..., n), the previous system of equations can be represented in matrix form

f(x^(k)) + W(x^(k))(a - x^(k)) + r^(k) = θ,

where r^(k) = [r_1^(k) ... r_n^(k)]^T. If the Jacobian matrix W for f is regular, then we have

a = x^(k) - W(x^(k))^(-1) (f(x^(k)) + r^(k)).

By neglecting the very last member on the right-hand side, in place of the vector a we get its new approximation, denoted by x^(k+1). In this way, one gets

(5.2.1.3)    x^(k+1) = x^(k) - W(x^(k))^(-1) f(x^(k))    (k = 0, 1, ...),

where x^(k) = [x_1^(k) ... x_n^(k)]^T. This method is often called the Newton-Raphson method.
Method (5.2.1.3) can be modified in the sense that the inverse matrix of W(x) is not evaluated at every step, but only at the first. Thus,

(5.2.1.4)    x^(k+1) = x^(k) - W(x^(0))^(-1) f(x^(k))    (k = 0, 1, ...).
Example 5.2.1.1. Consider the system of equations

f_1(x_1, x_2) = 9x_1^2 x_2 + 4x_2^2 - 36 = 0,
f_2(x_1, x_2) = 16x_2^2 - x_1^4 + x_2 + 1 = 0,

which has a solution in the first quadrant (x_1, x_2 > 0).
Using the graphic presentation of the implicit functions f_1 and f_2 in the first quadrant, one can see that the solution a is located in the neighborhood of the point (2, 1), so we take for the initial vector x^(0) = [2 1]^T, i.e. x_1^(0) = 2 and x_2^(0) = 1.
By partial differentiation of f_1 and f_2 one gets the Jacobian

W(x) = [ 18x_1x_2    9x_1^2 + 8x_2 ]
       [ -4x_1^3     32x_2 + 1     ],
so that the iterative process (5.2.1.3) obtains the explicit form

x_1^(k+1) = x_1^(k) - {(32x_2^(k) + 1) f_1^(k) - (9(x_1^(k))^2 + 8x_2^(k)) f_2^(k)} / Δ_k,

x_2^(k+1) = x_2^(k) - {4(x_1^(k))^3 f_1^(k) + 18x_1^(k) x_2^(k) f_2^(k)} / Δ_k,

where f_i^(k) = f_i(x_1^(k), x_2^(k)) and

Δ_k = 18x_1^(k) x_2^(k) (32x_2^(k) + 1) + 4(x_1^(k))^3 (9(x_1^(k))^2 + 8x_2^(k)).
C  STATEMENT FUNCTIONS (F1, F2 reconstructed from Example 5.2.1.1)
      F1(x1,x2)=9*x1**2*x2+4*x2**2-36
      F2(x1,x2)=16*x2**2-x1**4+x2+1
      Delta(x1,x2)=18*x1*x2*(32*x2+1)+4*x1**3*(9*x1**2+8*x2)
      Open(1, File='Newt-Kant.out')
      x10=2.d0
      x20=1.d0
      EPS=1.d-6
      Iter=0
      write(1,5)
5     format(1h ,// 3x,'i',7x,'x1(i)',9x,'x2(i)',
     * 9x,'f1(i)',9x,'f2(i)'/)
      write(1,10)Iter, x10,x20,F1(x10,x20),F2(x10,x20)
1     x11=x10-((32*x20+1)*F1(x10,x20)-(9*x10**2+8*x20)*
     * F2(x10,x20))/Delta(x10,x20)
      x21=x20-(4*x10**3*F1(x10,x20)+18*x10*x20*F2(x10,x20))
     * /Delta(x10,x20)
      Iter=Iter+1
      write(1,10)Iter, x11,x21,F1(x11,x21),F2(x11,x21)
10    Format(1x,i3,4D14.8,2x)
      If(Dabs(x10-x11).lt.EPS.and.Dabs(x20-x21).lt.EPS)stop
      If(Iter.gt.100)Stop
      x10=x11
      x20=x21
      go to 1
      End
and the output list of results is
  i      x1(i)         x2(i)         f1(i)         f2(i)
  0  .20000000D+01 .10000000D+01 .40000000D+01 .20000000D+01
  1  .19830508D+01 .92295840D+00 .73136345D-01 .88110835D-01
  2  .19837071D+01 .92074322D+00-.28694053D-04 .68348441D-04
  3  .19837087D+01 .92074264D+00-.10324186D-10-.56994853D-10
  4  .19837087D+01 .92074264D+00 .00000000D+00-.15543122D-14
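The same Newton-Kantorovich iteration can be sketched in Python, with the 2x2 Jacobian inverted in closed form (illustrative names; the system and Jacobian are those of Example 5.2.1.1):

```python
def newton_kantorovich(x1, x2, eps=1e-6, max_iter=100):
    """Iteration (5.2.1.3) for the system of Example 5.2.1.1, the 2x2
    Jacobian being inverted by Cramer's rule."""
    for _ in range(max_iter):
        f1 = 9*x1**2*x2 + 4*x2**2 - 36
        f2 = 16*x2**2 - x1**4 + x2 + 1
        a, b = 18*x1*x2, 9*x1**2 + 8*x2      # first Jacobian row
        c, d = -4*x1**3, 32*x2 + 1           # second Jacobian row
        det = a*d - b*c
        dx1 = (d*f1 - b*f2) / det            # components of W^(-1) f
        dx2 = (a*f2 - c*f1) / det
        x1, x2 = x1 - dx1, x2 - dx2
        if abs(dx1) < eps and abs(dx2) < eps:
            break
    return x1, x2

r1, r2 = newton_kantorovich(2.0, 1.0)
```

From (2, 1) it reproduces the tabulated solution (1.9837087, 0.92074264) in a few iterations.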
Example 5.2.1.2. Write a program for approximate solving of a system of equations

F(x, y) = 0,    G(x, y) = 0,

where F and G are continuously differentiable functions, using the Newton-Raphson method

x_{n+1} = x_n - (F(x_n, y_n)G_y(x_n, y_n) - G(x_n, y_n)F_y(x_n, y_n)) / J(x_n, y_n),
y_{n+1} = y_n - (G(x_n, y_n)F_x(x_n, y_n) - F(x_n, y_n)G_x(x_n, y_n)) / J(x_n, y_n)    (n = 0, 1, 2, ...),

where

J(x_n, y_n) = F_x(x_n, y_n)G_y(x_n, y_n) - F_y(x_n, y_n)G_x(x_n, y_n) ≠ 0.

The partial derivatives are to be obtained numerically. The iterations are interrupted when the conditions

|x_{n+1} - x_n| ≤ ε  and  |y_{n+1} - y_n| ≤ ε

are fulfilled. ε is the accuracy given in advance. For a test example take

F(x, y) = 2x^3 - y^2 - 1 = 0,
G(x, y) = xy^3 - y - 4 = 0,
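A Python sketch of this example, with the partial derivatives replaced by central differences as the problem statement requires (the starting point (1.2, 1.7) is an illustrative assumption, not from the text):

```python
def newton_numeric(F, G, x, y, eps=1e-6, h=1e-6, max_iter=100):
    """Newton-Raphson for F(x,y)=G(x,y)=0 with the partial derivatives
    approximated numerically by central differences."""
    for _ in range(max_iter):
        Fx = (F(x + h, y) - F(x - h, y)) / (2*h)
        Fy = (F(x, y + h) - F(x, y - h)) / (2*h)
        Gx = (G(x + h, y) - G(x - h, y)) / (2*h)
        Gy = (G(x, y + h) - G(x, y - h)) / (2*h)
        J = Fx*Gy - Fy*Gx
        dx = (F(x, y)*Gy - G(x, y)*Fy) / J
        dy = (G(x, y)*Fx - F(x, y)*Gx) / J
        x, y = x - dx, y - dy
        if abs(dx) <= eps and abs(dy) <= eps:
            break
    return x, y

xr, yr = newton_numeric(lambda x, y: 2*x**3 - y**2 - 1,
                        lambda x, y: x*y**3 - y - 4, 1.2, 1.7)
```

The residuals of both equations at the returned point should be near machine accuracy.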
(5.2.2.1)    f(x) = θ.

The gradient method for solving a given system of equations is based on minimization of the functional

U(x) = Σ_{i=1}^{n} f_i(x)^2.

We construct a series {x^(k)} such that U(x^(0)) > U(x^(1)) > U(x^(2)) > ···. In the same way as with linear equations, we take

(5.2.2.2)    x^(k+1) = x^(k) - λ_k ∇U(x^(k))    (k = 0, 1, ...),
wherefrom we obtain
n
""""H·
6 '/., f'·(·-:(k))
'/. .(,
\ ·i.=1
(5.2.2.3) /\1.
"·
== t = - -n - - -
~H[
·i.=1
au _ _!!___{~i··(·-..)
• - ~ '
2} '/, .[,
_
-
?~f·-(.-..)af,(x)
~
.:...1 :J, • ' '/,
we have
(5.2.2.4)    λ_k = (1/2) (f^(k), W_k W_k^T f^(k)) / (W_k W_k^T f^(k), W_k W_k^T f^(k)),

where f^(k) = f(x^(k)) and W_k = W(x^(k)). Finally, the gradient method can be represented in the form

(5.2.2.5)    x^(k+1) = x^(k) - 2λ_k W_k^T f^(k)    (k = 0, 1, ...).
Example 5.2.2.1. The system of nonlinear equations given in Example 5.2.1.1 will be solved using the gradient method, starting with the same initial vector x^(0) = [2 1]^T, giving the following list of results:
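A Python sketch of this gradient iteration for the same system (illustrative names; the step λ_k is the linearized optimal step of (5.2.2.4)):

```python
def gradient_method(x1, x2, max_iter=5000, tol=1e-18):
    """Gradient (steepest-descent) method for Example 5.2.1.1:
    minimize U = f1^2 + f2^2 with step x - 2*lam*W^T f."""
    for _ in range(max_iter):
        f1 = 9*x1**2*x2 + 4*x2**2 - 36
        f2 = 16*x2**2 - x1**4 + x2 + 1
        if f1*f1 + f2*f2 < tol:            # U(x) small enough
            break
        W = [[18*x1*x2, 9*x1**2 + 8*x2],
             [-4*x1**3, 32*x2 + 1]]
        WTf = [W[0][0]*f1 + W[1][0]*f2,    # W^T f
               W[0][1]*f1 + W[1][1]*f2]
        H = [W[0][0]*WTf[0] + W[0][1]*WTf[1],   # H = W W^T f
             W[1][0]*WTf[0] + W[1][1]*WTf[1]]
        lam = 0.5 * (f1*H[0] + f2*H[1]) / (H[0]*H[0] + H[1]*H[1])
        x1 -= 2 * lam * WTf[0]
        x2 -= 2 * lam * WTf[1]
    return x1, x2

g1, g2 = gradient_method(2.0, 1.0)
```

Convergence is only linear, so noticeably more iterations are needed than with the Newton-Kantorovich method on the same starting vector.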
(5.2.2.6)    Q(x) → min,

where Q(x) = Q(x_1, x_2, ..., x_n). For simplicity, we will denote Q^(k) = Q(x^(k)), so that the gradient in the point x^(k) will be

(5.2.2.7)    ∇Q(x^(k)) = grad Q(x^(k)) = {∂Q^(k)/∂x_1, ..., ∂Q^(k)/∂x_n}.

The gradient vector ∇Q(x) is in every point x^(k) = (x_1^(k), ..., x_n^(k)) normal to the surface of constant value of Q(x) that goes through the given point. This vector has in every point x^(k) the orientation of fastest growth of Q(x) from that point. The algorithm of gradient optimization methods consists in a procedure that, starting from a given or computed point x^(k), goes to the next point x^(k+1) with step Δx^(k) in the direction of the gradient, during calculation of the maximum
(5.2.2.8)
(5.2.2.9)
When a given step parameter h^(k) = (h_i^(k)), i = 1, ..., n, is used, the move in the direction of the gradient is realized by the formulas

(5.2.2.11)    x_i^(k+1) = x_i^(k) - h_i^(k) ∂Q^(k)/∂x_i    (i = 1, ..., n),

during finding the minimum of the function Q(x). In formulas (5.2.2.10) and (5.2.2.11) the move is in the direction of the gradient only if all h_i^(k) (i = 1, ..., n) are the same. Nevertheless, in some methods the steps are chosen arbitrarily, or by some criteria.
In gradient methods, formulas with the coordinates of the normalized gradient vector can also be used.
(5.2.2.13)
Thus, the following criterion for termination of the gradient search may be used:

(5.2.2.14)

where ε is a given small number. For the same purpose the Chebyshev gradient norm can also be used:

(5.2.2.15)    Σ_{i=1}^{n} |∂Q^(k)/∂x_i| ≤ ε.
The exposed formulas enable writing code for gradient methods in procedural and symbolic languages, which is suggested to the readers.
(5.2.3.1)    δx = -W^(-1) F    (k = 0, 1, ...),

where W is the Jacobian matrix. The question is how one should decide whether to accept the Newton step δx. If we denote F = f(x^(k)), a reasonable strategy for step acceptance is that |F|^2 = F·F decreases, which is the same requirement one would impose if trying to minimize

(5.2.3.2)    f = (1/2) F·F.

Every solution of (5.2.1.1) minimizes (5.2.3.2), but there may be some local minima of (5.2.3.2) that are not solutions of (5.2.1.1). Thus, simply applying some minimum finding algorithm can be wrong.
To develop a better strategy, note that the Newton step (5.2.3.1) is a descent direction for f:

(5.2.3.3)    ∇f · δx = (F·W)·(-W^(-1)·F) = -F·F < 0.
Thus, the strategy is quite simple. One should first try the full Newton step, because once we are close enough to the solution, we will get quadratic convergence. However, we should check at each iteration that the proposed step reduces f. If not, we go back (backtrack) along the Newton direction until we get an acceptable step. Because the Newton direction is a descent direction for f, we will surely find an acceptable step by backtracking.
It is worth mentioning that this strategy essentially minimizes f by taking Newton steps determined in such a way that they bring ∇f to zero. In spite of the fact that this method can occasionally lead to a local minimum of f, this is rather rare in practice. In such a case, one should try a new starting point.
(5.2.3.4)    x_new = x_old + λp    (0 < λ ≤ 1).

The aim is to find λ so that f(x_old + λp) has decreased sufficiently. Until the early 1970s, standard practice was to choose λ so that x_new exactly minimizes f in the direction p. However, we now know that it is extremely wasteful of function evaluations to do so. A better strategy is as follows: Since p is always the Newton direction in our algorithms, we first try λ = 1, the full Newton step. This will lead to quadratic convergence when x is sufficiently close to the solution. However, if f(x_new) does not meet our
acceptance criteria, we backtrack along the Newton direction, trying a smaller value of λ, until we find a suitable point. Since the Newton direction is a descent direction, we are guaranteed a decrease of f for sufficiently small λ. What should the criterion for accepting a step be? It is not sufficient to require merely that f(x_new) < f(x_old). This criterion can fail to converge to a minimum of f in one of two ways. First, it is possible to construct a sequence of steps satisfying this criterion with f decreasing too slowly relative to the step lengths. Second, one can have a sequence where the step lengths are too small relative to the initial rate of decrease of f. A simple way to fix the first problem is to require the average rate of decrease of f to be at least some fraction α of the initial rate of decrease ∇f · p:

(5.2.3.5)    f(x_new) ≤ f(x_old) + α ∇f · (x_new - x_old).
Here the parameter α satisfies 0 < α < 1. We can get away with quite small values of α; α = 10^(-4) is a good choice. The second problem can be fixed by requiring the rate of decrease of f at x_new to be greater than some fraction β of the rate of decrease of f at x_old. In practice, we will not need to impose this second constraint because our backtracking algorithm will have a built-in cutoff to avoid taking steps that are too small.
Here is the strategy for a practical backtracking routine. Define

g(λ) = f(x_old + λp),

so that

g'(λ) = ∇f · p.

If we need to backtrack, then we model g with the most current information we have and choose λ to minimize the model. We start with g(0) and g'(0) available. The first step is always the Newton step, λ = 1. If this step is not acceptable, we have available g(1) as well. We can therefore model g(λ) as a quadratic:

g(λ) ≈ [g(1) - g(0) - g'(0)]λ^2 + g'(0)λ + g(0),

whose minimum is at

(5.2.3.9)    λ = -g'(0) / (2[g(1) - g(0) - g'(0)]).

Since the Newton step failed, we can show that λ ≤ 1/2 for small α. We need to guard against too small a value of λ, however. We set λ_min = 0.1.
On second and subsequent backtracks, we model g as a cubic in λ, using the previous value g(λ_1) and the second most recent value g(λ_2):

(5.2.3.10)    g(λ) = aλ^3 + bλ^2 + g'(0)λ + g(0).

Requiring this expression to give the correct values of g at λ_1 and λ_2 gives two equations that can be solved for the coefficients a and b:

(5.2.3.11)    [a]  =  1/(λ_1 - λ_2) [ 1/λ_1^2      -1/λ_2^2   ] [ g(λ_1) - g'(0)λ_1 - g(0) ]
              [b]                   [ -λ_2/λ_1^2   λ_1/λ_2^2  ] [ g(λ_2) - g'(0)λ_2 - g(0) ].

The minimum of the cubic (5.2.3.10) is at

(5.2.3.12)    λ = (-b + √(b^2 - 3ag'(0))) / (3a).
One should enforce that λ lie between λ_max = 0.5λ_1 and λ_min = 0.1λ_1. The corresponding code in FORTRAN is given in [5], pp. 378-381. It is suggested to the reader to write the corresponding code in Mathematica.
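As a simplified Python sketch of this backtracking strategy (quadratic model only, whereas the text refines later backtracks with a cubic; function names are illustrative):

```python
def line_search(f, x_old, p, g_dot_p, alpha=1e-4, max_backtracks=30):
    """Backtrack along the Newton direction p: accept the step when the
    sufficient-decrease condition (5.2.3.5) holds, otherwise shrink
    lambda via the quadratic model, clamped to [0.1, 0.5] of its
    previous value."""
    f_old = f(x_old)
    lam = 1.0                                     # full Newton step first
    for _ in range(max_backtracks):
        x_new = [xi + lam * pi for xi, pi in zip(x_old, p)]
        f_new = f(x_new)
        if f_new <= f_old + alpha * lam * g_dot_p:
            return x_new, lam
        # minimizer of the quadratic model of g(l) = f(x_old + l p)
        lam_q = -g_dot_p * lam * lam / (2.0 * (f_new - f_old - g_dot_p * lam))
        lam = min(max(lam_q, 0.1 * lam), 0.5 * lam)
    return x_new, lam

step1 = line_search(lambda v: v[0] ** 2, [2.0], [-2.0], -8.0)
step2 = line_search(lambda v: v[0] ** 4, [1.0], [-2.0], -8.0)
```

In the first call the full step already satisfies (5.2.3.5); in the second the quadratic model cuts λ to 0.5 before acceptance.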
(5.2.3.14)    B_{i+1} · δx_i = δF_i,

where δF_i = F_{i+1} - F_i. This is a generalization of the one-dimensional secant approximation to the derivative, δF/δx. However, equation (5.2.3.14) does not determine B_{i+1} uniquely in more than one dimension. Many different auxiliary conditions to determine B_{i+1} have been examined, but the best one results from Broyden's formula. This formula is based on the idea of getting B_{i+1} by making the least change to B_i consistent with the secant equation (5.2.3.14). Broyden gave the formula

(5.2.3.15)    B_{i+1} = B_i + (δF_i - B_i δx_i) ⊗ δx_i / (δx_i · δx_i).
Thus, instead of solving equation (5.2.3.1) by, for example, LU decomposition, one determines

(5.2.3.17)    δx_i = -B_i^(-1) F_i

by matrix multiplication in O(n^2) operations. The disadvantage of this method is that it cannot be easily embedded in a globally convergent strategy, for which the gradient of equation (5.2.3.2) requires B, not B^(-1):

(5.2.3.18)    ∇f ≈ B^T · F.

Accordingly, one should implement the update formula in the form (5.2.3.15). However, we can still preserve the O(n^2) solution of (5.2.3.1) by using a QR decomposition of B_{i+1}, updated in O(n^2) operations. All that is needed is an initial approximation B_0 to start the process. It is often accepted to take the identity matrix, and then allow O(n) updates to produce a reasonable
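A compact Python sketch of Broyden's method with B_0 = I (illustrative; for clarity the step is obtained by plain Gaussian elimination instead of the O(n^2) QR update discussed above):

```python
def solve(A, b):
    """Solve the small linear system A y = b by Gaussian elimination
    with partial pivoting (stand-in for the LU/QR step of the text)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m * M[k][c]
    y = [0.0] * n
    for k in range(n - 1, -1, -1):
        y[k] = (M[k][n] - sum(M[k][c] * y[c] for c in range(k + 1, n))) / M[k][k]
    return y

def broyden(F, x0, tol=1e-10, max_iter=200):
    """Broyden's method with B_0 = I and update (5.2.3.15):
    B_{i+1} = B_i + (dF - B_i dx) dx^T / (dx . dx)."""
    n = len(x0)
    B = [[float(i == j) for j in range(n)] for i in range(n)]
    x = list(x0)
    Fx = F(x)
    for _ in range(max_iter):
        if max(abs(v) for v in Fx) < tol:
            return x
        dx = solve(B, [-v for v in Fx])          # B_i dx = -F_i
        x_new = [xi + di for xi, di in zip(x, dx)]
        F_new = F(x_new)
        dF = [a - b for a, b in zip(F_new, Fx)]
        Bdx = [sum(B[i][j] * dx[j] for j in range(n)) for i in range(n)]
        denom = sum(d * d for d in dx)
        for i in range(n):
            for j in range(n):
                B[i][j] += (dF[i] - Bdx[i]) * dx[j] / denom
        x, Fx = x_new, F_new
    return x

# a linear 2x2 system (finite termination) and a scalar secant-like case
root2 = broyden(lambda v: [2*v[0] + v[1] - 3, v[0] + 3*v[1] - 4], [0.0, 0.0])
root1 = broyden(lambda v: [v[0]**2 - 2], [1.0])
```

In one dimension the update reduces Broyden's method exactly to the secant method (5.1.1.7).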
LECTURES
LESSON VI
6.1. Introduction
This lesson is devoted to one of the most important areas of the theory of approximation - interpolation of functions. In addition to its theoretical importance in the construction of numerical methods for solving many problems, like numerical differentiation, numerical integration and similar, it has practical application to many engineering problems, including FEM problems.
The theory of approximation deals with replacing a function f defined on some set X by another function Φ. Let the function Φ depend on n + 1 parameters a_0, a_1, ..., a_n, i.e.

(6.1.1)    Φ(x) = Φ(x; a_0, a_1, ..., a_n),

whereby the system of functions {Φ_k} fulfills some given conditions. Linearity of the function Φ means linearity regarding the parameters a_i (i = 0, 1, ..., n). When Φ_k(x) = x^k (k = 0, 1, ..., n), i.e.

Φ(x) = a_0 + a_1 x + ··· + a_n x^n,

we have approximation by algebraic polynomials. In the case when {Φ_k} = {1, cos x, sin x, cos 2x, sin 2x, ...} we have approximation by trigonometric polynomials, i.e. trigonometric approximation. For the case

Φ(x) = Φ(x; c_0, b_0, ..., c_r, b_r) = c_0 e^{b_0 x} + ··· + c_r e^{b_r x},

where n + 1 = 2(r + 1), i.e. n = 2r + 1, we have exponential approximation.
2. Rational approximation:

Φ(x) = (a_0 + a_1 x + ··· + a_r x^r) / (b_0 + b_1 x + ··· + b_s x^s),

where n = r + s + 1.
Let the function f be given on segment [a, b] by the set of pairs (x_k, f_k) (k = 0, 1, ..., n), where f_k = f(x_k). If for approximation of the function f by the function Φ the criterion for the choice of parameters a_0, a_1, ..., a_n is given by the system of equations

Φ(x_k; a_0, a_1, ..., a_n) = f_k    (k = 0, 1, ..., n),

we have the problem of function interpolation. The function Φ is called in this case the interpolation function and the points x_k (k = 0, 1, ..., n) interpolation nodes.
The problem of interpolation could be more complicated than noted. A more general case appears when, in addition to the function values in the interpolation nodes, the derivatives of the function are also included.
In the linear case, the interpolation conditions reduce to the system of linear equations

(6.2.1)    a_0 Φ_0(x_k) + a_1 Φ_1(x_k) + ··· + a_n Φ_n(x_k) = f_k    (k = 0, 1, ..., n).

In order for the above given interpolation problem to have a unique solution, it is necessary that the matrix of system (6.2.1) be regular.
On the system of functions {Φ_k} such conditions should be imposed under which there exists no nontrivial linear combination

a_0 Φ_0(x) + a_1 Φ_1(x) + ··· + a_n Φ_n(x)

which has n + 1 different zeros on [a, b]. Systems of functions with this characteristic are called Chebyshev (Tchebyshev) systems, or T-systems. There exists an extraordinary monograph regarding T-systems [6].
Theorem 6.2.1. If the functions Φ_k : [a, b] → R (k = 0, 1, ..., n) are n + 1 times differentiable and if for every k = 0, 1, ..., n the Wronsky determinant W_k is different from zero, i.e.

W_k = | Φ_0(x)        Φ_1(x)        ···  Φ_k(x)       |
      | Φ_0'(x)       Φ_1'(x)       ···  Φ_k'(x)      |  ≠ 0,
      | ···                                           |
      | Φ_0^(k)(x)    Φ_1^(k)(x)    ···  Φ_k^(k)(x)   |

then {Φ_0, Φ_1, ..., Φ_n} is a Chebyshev system on [a, b].
6.3. Lagrange's interpolation

Let the function f be given by its values f_k = f(x_k) in points x_k ∈ [a, b] (k = 0, 1, ..., n). Without decreasing the generality, assume

(6.3.1)    a ≤ x_0 < x_1 < ··· < x_n ≤ b.

If we take the points x_k for interpolation knots and put Φ_k(x) = x^k (k = 0, 1, ..., n), we have the problem of interpolation of function f by an algebraic polynomial. Denote this polynomial by P_n, i.e.

(6.3.2)    P_n(x) = Σ_{k=0}^{n} f(x_k) ω(x) / ((x - x_k) ω'(x_k)),

where

ω(x) = (x - x_0)(x - x_1)···(x - x_n),
ω'(x_k) = (x_k - x_0)···(x_k - x_{k-1})(x_k - x_{k+1})···(x_k - x_n).
The formula (6.3.2) is called the Lagrange interpolation formula, and the polynomial P_n the Lagrange interpolation polynomial. When programming, it is suggested to use the following form of the Lagrange formula:

P_n(x) = Σ_{k=0}^{n} f(x_k) ∏_{i=0, i≠k}^{n} (x - x_i)/(x_k - x_i).
Based on the Vandermonde determinant

| 1  x_0  ···  x_0^n |
| 1  x_1  ···  x_1^n |  =  ∏_{i>j} (x_i - x_j) ≠ 0
| ···                |
| 1  x_n  ···  x_n^n |

and the assumption (6.3.1), it follows that the Lagrange polynomial (6.3.2) is unique.
Example 6.3.1. For the function values given in tabular form, find the Lagrange interpolation polynomial.

x_k     f(x_k)
-1      -1
 0       2
 2      10
 3      35
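The product form of the Lagrange formula recommended above for programming can be sketched in Python on the data of Example 6.3.1 (one can check by elimination that the interpolating cubic is P(x) = (5x^3 - 4x^2 + 6)/3):

```python
def lagrange(xs, fs, x):
    """Evaluate P_n(x) in the product form: each term is f(x_k) times
    the product of (x - x_i)/(x_k - x_i) over i != k."""
    total = 0.0
    for k, (xk, fk) in enumerate(zip(xs, fs)):
        term = fk
        for i, xi in enumerate(xs):
            if i != k:
                term *= (x - xi) / (xk - xi)
        total += term
    return total

# data of Example 6.3.1
xs, fs = [-1.0, 0.0, 2.0, 3.0], [-1.0, 2.0, 10.0, 35.0]
```

This form avoids forming ω(x) and ω'(x_k) explicitly and therefore behaves better at (or near) the interpolation nodes.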
The ratio

[x_0, x_1; f] = (f(x_1) - f(x_0)) / (x_1 - x_0)

is called the divided difference of first order (of the function f in points x_0 and x_1) and denoted as [x_0, x_1; f].
Divided differences of order r are defined recursively by

(6.4.1)    [x_0, x_1, ..., x_r; f] = ([x_1, ..., x_r; f] - [x_0, ..., x_{r-1}; f]) / (x_r - x_0).

One can show that the divided difference of order r has the characteristic of linearity, i.e.
[x_0, x_1, ..., x_r; f] = ∫_0^1 ∫_0^{t_1} ··· ∫_0^{t_{r-1}} f^(r)(x_0 + Σ_{i=1}^{r} (x_i - x_{i-1}) t_i) dt_r ··· dt_2 dt_1.

Taking x_i → x_0 (i = 1, ..., r) in the last equality, we get

(6.4.2)    [x_0, x_1, ..., x_r; f] → (1/r!) f^(r)(x_0).
Let us now express the value of the function f(x_r) (r ≤ n) by means of divided differences [x_0, ..., x_i; f] (i = 0, 1, ..., r). For r = 1, based on definition (6.4.1), we have f(x_1) = f(x_0) + (x_1 - x_0)[x_0, x_1; f]. In general,

f(x_r) = f(x_0) + (x_r - x_0)[x_0, x_1; f] + (x_r - x_0)(x_r - x_1)[x_0, x_1, x_2; f]
       + ··· + (x_r - x_0)(x_r - x_1)···(x_r - x_{r-1})[x_0, x_1, ..., x_r; f].
Using divided differences for the set of data (x_k, f(x_k)) (k = 0, ..., n), the interpolation polynomial of the following form can be constructed:

P_n(x) = f(x_0) + (x - x_0)[x_0, x_1; f] + (x - x_0)(x - x_1)[x_0, x_1, x_2; f]
       + ··· + (x - x_0)(x - x_1)···(x - x_{n-1})[x_0, x_1, ..., x_n; f].

This polynomial is called Newton's interpolation polynomial.
Having in mind the uniqueness of the algebraic interpolation polynomial, we conclude that Newton's interpolation polynomial is equivalent to the Lagrange polynomial. Note that the construction of Newton's interpolation polynomial demands previous forming of the table of divided differences, which was not the case with the Lagrange polynomial. On the other hand, involving a new interpolation node in order to reduce the interpolation error is more convenient with Newton's polynomial, because it does not demand repeating the whole calculation. Namely, at Newton's interpolation we have

P_{n+1}(x) = P_n(x) + (x - x_0)(x - x_1)···(x - x_n)[x_0, x_1, ..., x_{n+1}; f].

If we put x_i → x_0 in Newton's interpolation polynomial P_n, based on (6.4.2) it reduces to the Taylor polynomial.
Example 6.4.1. Based on the table of values of the function x → ch x, form the table of divided differences and write Newton's interpolation polynomial.

k        0          1          2          3
x_k      0.0        0.2        0.5        1.0
f(x_k)   1.0000000  1.0200668  1.1276260  1.5430806
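The table of divided differences and the evaluation of Newton's polynomial can be sketched in Python on the data of Example 6.4.1 (illustrative names, not from the text):

```python
def divided_differences(xs, fs):
    """Return [f(x0), [x0,x1; f], [x0,x1,x2; f], ...], computed in
    place column by column, as from the table of divided differences."""
    coef = list(fs)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_poly(xs, coef, x):
    """Evaluate Newton's interpolation polynomial by a Horner-like
    nested scheme."""
    p = coef[-1]
    for k in range(len(xs) - 2, -1, -1):
        p = p * (x - xs[k]) + coef[k]
    return p

# data of Example 6.4.1 (ch x)
xs = [0.0, 0.2, 0.5, 1.0]
fs = [1.0000000, 1.0200668, 1.1276260, 1.5430806]
coef = divided_differences(xs, fs)
```

Adding one more node only appends one coefficient to `coef`, which is exactly the advantage of Newton's form discussed above.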
Let x = x_0 + ph (0 ≤ p ≤ n), i.e. p = (x - x_0)/h. Because of

E^p f_0 = Σ_{k=0}^{n} C(p, k) Δ^k f_0 + R_n(f; x),

where C(p, k) = p(p - 1)···(p - k + 1)/k! is the generalized binomial coefficient, i.e.

f(x_0 + ph) = Σ_{k=0}^{n} C(p, k) Δ^k f_0 + R_n(f; x),

the polynomial obtained in the given way is called the first Newton's interpolation polynomial. This polynomial can also be defined recursively.
6.6. Spline functions and interpolation by splines

For the mechanical spline (an elastic ruler bent across the interpolation points), the bending energy is proportional to the integral of squared curvature

(6.6.1)    ∫ K(x)^2 ds = ∫_a^b S''(x)^2 / (1 + S'(x)^2)^{5/2} dx,
and the stabilized shape it takes is such that it minimizes (6.6.1) under the given limitations. In a similar way the mathematical spline is defined, by discarding S'(x)^2 in the denominator of (6.6.1), which is close to the previous case when S'(x) ≪ 1. Thus, now the integral

(6.6.2)    ∫_a^b S''(x)^2 dx

is to be minimized. The mathematical spline can be defined more generally by using derivatives of higher order than two in (6.6.2).
The first results regarding spline functions appeared in papers of Quade and Collatz ([11], 1938) and Courant ([12]). In 1946 mathematicians started studying the spline shape, and derived the piecewise polynomial formula known as the spline curve or function (Schoenberg [13]). This has led to the widespread use of such functions in computer-aided design, especially in the surface designs of vehicles. Schoenberg gave the spline function its name after its resemblance to the mechanical spline used by draftsmen. The origins of the spline in wood-working may show in the conjectured etymology which connects the word spline to the word splinter. Later craftsmen have made splines out of rubber, steel, and other elastomeric materials. Spline devices help bend the wood for pianos, violins, violas, etc. The Wright brothers used one to shape the wings of their aircraft.
The extensive development of spline functions and usage of their approximation properties began in the sixties of the last century. Splines are greatly applied in numerical mathematics, in particular to interpolation, numerical differentiation, numerical integration, differential equations, etc. The extremal and approximative attributes of the so-called natural cubic spline are given in [1], pp. 81-86.
Let on the segment [a, b] a network of nodes be given:

(6.6.3)
$$\Delta_n:\quad a = x_0 < x_1 < \cdots < x_n = b.$$

Denote by $P_m$ the set of algebraic polynomials of degree not greater than m.

Definition 6.6.1. A function $S_m(x) = S_{m,k}(x)$ is called a polynomial spline of degree m and defect k ($1 \le k \le m$) with nodes (6.6.3) if it satisfies the conditions:

1° $S_m \in P_m$ on every subsegment $[x_{i-1}, x_i]$ ($i = 1, \ldots, n$);
2° $S_m \in C^{m-k}[a, b]$.

The points $x_i$ are called the nodes of the spline.
We will further consider polynomial splines of defect 1, and for $S_m(x) = S_{m,1}(x)$ we say it is a spline of degree m. A very important kind of spline, the interpolation cubic spline with m = 3, is most frequently used and applied in engineering design. Therefore we join to the network nodes $\Delta_n$ the real numbers $f_0, f_1, \ldots, f_n$.
Definition 6.6.2. The function $S_3(x) = S_3(x; f)$ is called the interpolation cubic spline for the function f on the network $\Delta_n$ ($n \ge 2$) if the following conditions are fulfilled:

1° $S_3(x; f) \in P_3$ if $x \in [x_{i-1}, x_i]$ ($i = 1, \ldots, n$);
2° $S_3(x; f) \in C^2[a, b]$;
3° $S_3(x_i; f) = f_i = f(x_i)$ ($i = 0, \ldots, n$).
We see that condition 3° does not appear in Definition 6.6.1. The spline defined in this way is called a simple cubic spline. It interpolates the function f in the network nodes (condition 3°), it is continuous on [a, b] together with its derivatives $S_3'(x)$ and $S_3''(x)$ (condition 2°), and it is defined on every subsegment between neighboring nodes by a polynomial of degree not greater than 3. Thus, the third derivative of the cubic spline is discontinuous, being piecewise constant.
The cubic spline has two free parameters, usually determined by some additional boundary conditions. Typical ones are

(6.6.5)
$$S_3''(a; f) = S_3''(b; f) = 0$$

and

(6.6.6)
$$S_3'(a; f) = f'(a), \qquad S_3'(b; f) = f'(b).$$
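The linear system for the unknown second derivatives (moments) of the simple cubic spline is tridiagonal, so it can be solved in linear time by the Thomas algorithm. The following sketch, in Python rather than the FORTRAN used elsewhere in this script (the function name natural_cubic_spline is ours), assembles the system for the natural conditions $S_3''(a) = S_3''(b) = 0$ and returns the spline as a callable:

```python
def natural_cubic_spline(x, f):
    """Simple cubic spline through (x_i, f_i) with natural end conditions
    S''(x_0) = S''(x_n) = 0.  Solves the tridiagonal system for the
    moments M_i = S''(x_i) by the Thomas algorithm, then evaluates the
    standard piecewise-cubic formula."""
    n = len(x) - 1
    h = [x[i + 1] - x[i] for i in range(n)]
    M = [0.0] * (n + 1)                     # natural conditions: M_0 = M_n = 0
    if n >= 2:
        # tridiagonal system for the interior moments M_1 .. M_{n-1}
        a = [h[i - 1] for i in range(1, n)]                # sub-diagonal
        b = [2.0 * (h[i - 1] + h[i]) for i in range(1, n)] # diagonal
        c = [h[i] for i in range(1, n)]                    # super-diagonal
        d = [6.0 * ((f[i + 1] - f[i]) / h[i] - (f[i] - f[i - 1]) / h[i - 1])
             for i in range(1, n)]
        for i in range(1, n - 1):           # forward elimination
            w = a[i] / b[i - 1]
            b[i] -= w * c[i - 1]
            d[i] -= w * d[i - 1]
        M[n - 1] = d[n - 2] / b[n - 2]      # back substitution
        for i in range(n - 3, -1, -1):
            M[i + 1] = (d[i] - c[i] * M[i + 2]) / b[i]

    def S(t):
        # locate the subsegment containing t (clamped to [x_0, x_n])
        i = min(next((j for j in range(n) if t <= x[j + 1]), n - 1), n - 1)
        A = (x[i + 1] - t) / h[i]
        B = (t - x[i]) / h[i]
        return (A * f[i] + B * f[i + 1]
                + ((A**3 - A) * M[i] + (B**3 - B) * M[i + 1]) * h[i] * h[i] / 6.0)
    return S
```

Interpolation holds at the nodes by construction, and linear data produce zero moments (the spline then degenerates to the polygonal interpolant); both properties make convenient sanity checks.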
6. 7. Prony's interpolation
Dating from 1795, Prony's interpolation ([14]) is often known as Prony's exponential approximation, and until nowadays it has not been applied as much as its sophisticated nature deserves. Students and readers are encouraged to apply the following formulas when developing algorithms for programming Prony's method. Some benchmarking research comparing the application of this method, the cubic spline, and, for example, the least-squares method to some physical problems is also strongly encouraged.
If we have an interpolation function of the form

$$f(x) = c_1e^{a_1x} + c_2e^{a_2x} + \cdots + c_ne^{a_nx},$$

and the function f is given on a set of equidistant points $\{(x_k, f_k)\}_{k=0,1,\ldots,2n-1}$, with $x_k - x_{k-1} = h = \mathrm{const}$ ($k = 1, 2, \ldots, 2n-1$), then by the substitution $x = x_0 + kh$ the data set can be replaced by $\{(k, f_k)\}_{k=0,1,\ldots,2n-1}$, so that $f_k = c_1\mu_1^k + \cdots + c_n\mu_n^k$, where $\mu_k = e^{a_kh}$. Setting the interpolation problem leads to the system

(6.7.2)
$$\begin{aligned} c_1 + c_2 + \cdots + c_n &= f_0,\\ c_1\mu_1 + c_2\mu_2 + \cdots + c_n\mu_n &= f_1,\\ &\ \ \vdots\\ c_1\mu_1^{N-1} + c_2\mu_2^{N-1} + \cdots + c_n\mu_n^{N-1} &= f_{N-1}. \end{aligned}$$
If the $\mu$'s are known (or preassigned) and N = n, the system (6.7.2) is to be solved exactly as a system of linear equations, and if N > n, approximately by the least-squares method (see next chapter).

If the $\mu$'s are to be determined, we need at least 2n equations, but then we have a system of nonlinear equations which, as we know, in the general case could be unsolvable. Therefore, we assume that the $\mu$'s are the roots of an algebraic polynomial of the form

$$\mu^n + \alpha_1\mu^{n-1} + \cdots + \alpha_n = 0.$$
By multiplying all equations in (6.7.2) by $\alpha_n, \alpha_{n-1}, \ldots, \alpha_1, 1$, we get a linear system for determining the coefficients $\alpha_i$. This system has a unique solution if the determinant

$$\begin{vmatrix} f_0 & \cdots & f_{n-1}\\ f_1 & \cdots & f_n\\ \vdots & & \vdots \end{vmatrix}$$

is different from zero.
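The procedure above - a linear system for the coefficients of the algebraic polynomial, its roots $\mu_i$, and then a linear system for the $c_i$ - can be sketched in Python for n = 2, where the quadratic for $\mu$ is solvable in closed form (the function name is ours; four samples and real distinct roots are assumed):

```python
import math

def prony2(f):
    """Prony fit f_k ~ c1*mu1**k + c2*mu2**k from four equidistant samples
    (the case n = 2, so the quadratic for mu has a closed-form solution;
    real distinct roots are assumed)."""
    f0, f1, f2, f3 = f
    # recurrence f_{k+2} = b1*f_{k+1} + b0*f_k for k = 0, 1 (2x2 linear system)
    det = f0 * f2 - f1 * f1
    b0 = (f2 * f2 - f1 * f3) / det
    b1 = (f0 * f3 - f1 * f2) / det
    # the mu's are the roots of mu^2 - b1*mu - b0 = 0
    r = math.sqrt(b1 * b1 + 4.0 * b0)
    mu1, mu2 = (b1 + r) / 2.0, (b1 - r) / 2.0
    # the c's from c1 + c2 = f0 and c1*mu1 + c2*mu2 = f1
    c1 = (f1 - mu2 * f0) / (mu1 - mu2)
    c2 = f0 - c1
    return (c1, mu1), (c2, mu2)

# samples of 3*2**k + 2*0.5**k; the exponents are recovered as a_i = log(mu_i)/h
(c1, mu1), (c2, mu2) = prony2([5.0, 7.0, 12.5, 24.25])
```

For general n the same pattern applies, with a polynomial root finder replacing the quadratic formula and a least-squares solve replacing the 2x2 systems when more than 2n samples are available.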
LECTURES
LESSON VII

7. Best Approximation of Functions

7.1. Introduction
This chapter is devoted to the approximations of functions most applied in different areas of science and engineering.

Let the function $f: [a, b] \to R$ be given by the set of value pairs $(x_j, f_j)$ ($j = 0, 1, \ldots, m$), where $f_j = f(x_j)$. Consider the problem of approximation of the function f by the linear approximation function

$$\Phi(x) = \Phi(x; a_0, \ldots, a_n) = \sum_{i=0}^{n} a_i\phi_i(x),$$

requiring that

(7.1.1)
$$\Phi(x_j; a_0, \ldots, a_n) = f_j \quad (j = 0, 1, \ldots, m),$$
which in the general case does not have a solution, i.e. all equations of the system (7.1.1) cannot be simultaneously satisfied. We therefore define $\delta_n$ by

(7.1.2)
$$\delta_n(x) = f(x) - \Phi(x; a_0, \ldots, a_n)$$

and, on the discrete set of points, the norm

(7.1.3)
$$\|\delta_n\|_r = \Bigl(\sum_{j=0}^{m} |\delta_n(x_j)|^r\Bigr)^{1/r} \quad (r \ge 1).$$

The equality (7.1.3) gives the criterion for determination of the parameters $a_0, a_1, \ldots, a_n$ in the approximation function $\Phi$. The quantity $\min\|\delta_n\|_r$, which always exists, is called the value of the best approximation in the $\ell^r$ norm. The optimal values of the parameters $a_i = a_i^*$ ($i = 0, 1, \ldots, n$) in the sense of (7.1.3) give the best $\ell^r$ approximation function

$$\Phi^*(x) = \sum_{i=0}^{n} a_i^*\phi_i(x).$$
By introducing a weight function $p: (a, b) \to R^+$, the more general case of mean-square approximations can be considered, where the corresponding norms for the discrete and continuous case are given as (see [1], pp. 90-91)

(7.1.4)
$$\|\delta_n\| = \Bigl(\sum_{j=0}^{m} p(x_j)\,\delta_n(x_j)^2\Bigr)^{1/2}$$

and

(7.1.5)
$$\|\delta_n\| = \Bigl(\int_a^b p(x)\,\delta_n(x)^2\,dx\Bigr)^{1/2},$$

respectively.
Example 7.1.1. The function $x \to f(x) = x^{1/3}$ is to be approximated by the function $x \to \Phi(x) = a_0 + a_1x$ on [0, 1], with respect to different norms. Here we have $\delta_1(x) = x^{1/3} - a_0 - a_1x$ ($0 \le x \le 1$) (see [1], pp. 91-93).

1° We get the best $L^1(0, 1)$ approximation by minimization of the norm

$$\|\delta_1\|_1 = \int_0^1 |x^{1/3} - a_0 - a_1x|\,dx.$$
Having in mind that $\delta_1$ changes sign on the segment [0, 1] at the points $x_1$ and $x_2$ (see Fig. 7.1.1), the system of equations (7.1.6) reduces to the system

$$x_2 - x_1 = \frac12, \qquad x_2^2 - x_1^2 = \frac12,$$

whose solution is $x_1 = \frac14$, $x_2 = \frac34$.

2° For the best $L^2(0, 1)$ (mean-square) approximation we minimize $J(a_0, a_1) = \int_0^1 (x^{1/3} - a_0 - a_1x)^2\,dx$. From

$$\frac{\partial J}{\partial a_0} = -2\int_0^1 (x^{1/3} - a_0 - a_1x)\,dx = 0, \qquad \frac{\partial J}{\partial a_1} = -2\int_0^1 x\,(x^{1/3} - a_0 - a_1x)\,dx = 0,$$

it follows

$$a_0 + \frac12a_1 = \frac34, \qquad \frac12a_0 + \frac13a_1 = \frac37,$$
i.e. $a_0 = a_0^* = \frac37$, $a_1 = a_1^* = \frac{9}{14}$, so that the best mean-square approximation is given by

$$\Phi^*(x) = \frac37 + \frac{9}{14}x \approx 0.42857 + 0.64286\,x.$$
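The 2x2 normal system of 2° can be checked mechanically. A minimal Python sketch with exact rational arithmetic, assuming only the closed-form integrals $\int_0^1 x^k\,dx = 1/(k+1)$ and $\int_0^1 x^{1/3}x^k\,dx = 3/(3k+4)$:

```python
from fractions import Fraction as Fr

# Normal equations for the best L2(0,1) fit a0 + a1*x to f(x) = x**(1/3):
# Gram matrix of {1, x} and the moment vector (f,1), (f,x).
G = [[Fr(1, 1), Fr(1, 2)],
     [Fr(1, 2), Fr(1, 3)]]
b = [Fr(3, 4), Fr(3, 7)]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
a0 = (b[0] * G[1][1] - G[0][1] * b[1]) / det   # Cramer's rule
a1 = (G[0][0] * b[1] - b[0] * G[1][0]) / det
print(a0, a1)   # prints: 3/7 9/14
```

Exact arithmetic avoids any doubt about rounding when verifying hand computations of this kind.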
3° For determining the min-max approximation we will use the following simple geometrical procedure. Through the end-points of the curve $y = f(x) = x^{1/3}$ ($0 \le x \le 1$) we draw the secant, and then the tangent to the curve which is parallel to this secant (see Fig. 7.1.2). The corresponding equations for those straight lines are

$$y = y_{\mathrm{sec}} = x \quad\text{and}\quad y = y_{\mathrm{tan}} = x + \frac{2\sqrt3}{9},$$

so that

$$\Phi^*(x) = \frac12(y_{\mathrm{sec}} + y_{\mathrm{tan}}) = x + \frac{\sqrt3}{9} \approx x + 0.19245.$$
7.2. Best $L^2$ approximation

Consider now the best mean-square approximation of the function f by a function of the form

$$\Phi(x) = \sum_{i=0}^{n} a_i\Phi_i(x),$$

where $\{\Phi_i\}$ is a system of linearly independent functions from the space $L^2(a, b)$, with the scalar product introduced by

$$(f, g) = \int_a^b p(x)f(x)g(x)\,dx.$$

Minimization of $F(a_0, \ldots, a_n) = \|f - \Phi\|^2$ leads to the normal system of equations

(7.2.1)
$$\frac{\partial F}{\partial a_j} = 2\int_a^b p(x)\Bigl(f(x) - \sum_{i=0}^{n} a_i\Phi_i(x)\Bigr)\bigl(-\Phi_j(x)\bigr)\,dx = 0 \quad (j = 0, 1, \ldots, n).$$

The matrix of this system is known as Gram's matrix. It can be shown that this matrix is regular if the system of functions $\{\Phi_i\}$ is linearly independent, so the given approximation problem has a unique solution.
The system of equations (7.2.1) can be simply solved if the system of functions $\{\Phi_i\}$ is orthogonal. Namely, then all off-diagonal elements of the matrix of the system are equal to zero, i.e. the matrix is diagonal, and the solutions are

$$a_k = \frac{(f, \Phi_k)}{\|\Phi_k\|^2} \quad (k = 0, 1, \ldots, n).$$
It can be shown that with the parameters $a_i$ ($i = 0, 1, \ldots, n$) chosen in the given way, the function F reaches its minimal value. Namely,

$$d^2F = 2\sum_{k=0}^{n} \|\Phi_k\|^2\,da_k^2 > 0.$$
Thus, the best $L^2$ approximation of the function f in the subspace $X_n = L(\Phi_0, \Phi_1, \ldots, \Phi_n)$, where $\{\Phi_i\}$ is an orthogonal system of functions, is given as

(7.2.3)
$$\Phi^*(x) = \sum_{k=0}^{n} \frac{(f, \Phi_k)}{\|\Phi_k\|^2}\,\Phi_k(x).$$
Consider now, as an example, the best $L^2$ approximation of the function $f(x) = |x|$ on $(-1, 1)$ with respect to the weight $p(x) = (1 - x^2)^{3/2}$. We first compute the moments

$$N_k = \int_{-1}^{+1} x^{2k}(1 - x^2)^{3/2}\,dx \quad (k \in N_0),$$

needed for further considerations (see [4], pp. 92-93). By partial integration over the integral

$$N_{k-1} - N_k = \int_{-1}^{+1} x^{2(k-1)}(1 - x^2)^{5/2}\,dx \quad (k \in N),$$

we get $N_{k-1} - N_k = \dfrac{5}{2k-1}N_k$, i.e. $N_k = \dfrac{2k-1}{2k+4}N_{k-1}$ ($k \in N$), so that, with $N_0 = \dfrac{3\pi}{8}$, we have

$$N_k = 3\pi\,\frac{(2k-1)!!}{(2k+4)!!} \quad (k \in N).$$

Starting from the natural basis $\{1, x, x^2, \ldots\}$ and using Gram-Schmidt orthogonalisation, we get subsequently
$$\Phi_0(x) = 1, \qquad \Phi_1(x) = x - \frac{(x, \Phi_0)}{(\Phi_0, \Phi_0)}\,\Phi_0(x) = x, \qquad \Phi_2(x) = x^2 - \frac{(x^2, \Phi_0)}{(\Phi_0, \Phi_0)}\,\Phi_0(x) = x^2 - \frac{N_1}{N_0} = x^2 - \frac16.$$

Because of

$$(f, \Phi_0) = \frac25, \quad (f, \Phi_1) = 0, \quad (f, \Phi_2) = \frac1{21}, \quad (f, \Phi_3) = 0,$$

we obtain

$$\Phi^*(x) = \frac{16}{15\pi} + \frac{128}{35\pi}\Bigl(x^2 - \frac16\Bigr) \approx 0.14551309 + 1.1641047\,x^2.$$

This function is, in addition, the best approximation in the set of polynomials of degree not greater than two.

Some further very valuable considerations can be found in [1], pp. 96-99.
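The inner products quoted in the example can be verified by brute force. A small Python sketch using composite midpoint quadrature (the resolution n = 100000 is an arbitrary choice; function names are ours):

```python
import math

def midpoint(g, a=-1.0, b=1.0, n=100000):
    """Composite midpoint rule on [a, b] with n cells."""
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

# f(x) = |x| against the weight p(x) = (1 - x^2)^(3/2) on (-1, 1)
w = lambda x: (1.0 - x * x) ** 1.5
n0     = midpoint(w)                                    # moment N_0 = 3*pi/8
f_phi0 = midpoint(lambda x: abs(x) * w(x))              # (f, Phi_0) = 2/5
f_phi2 = midpoint(lambda x: abs(x) * (x * x - 1/6) * w(x))  # (f, Phi_2) = 1/21
```

This kind of numerical cross-check is cheap insurance when deriving orthogonal-expansion coefficients by hand.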
7.3. The least-squares method

Consider determination of the parameters of the linear approximation function in the sense of minimization of the norm (7.1.4), where $p: [a, b] \to R^+$ is a given weight function and $\delta_n$ is defined by (7.1.2). The conditions

$$\frac{\partial F}{\partial a_i} = 2\sum_{j=0}^{m} p(x_j)\,\delta_n(x_j)\,\frac{\partial\delta_n(x_j)}{\partial a_i} = 0 \quad (i = 0, 1, \ldots, n)$$

serve for determination of the parameters $a_i$ ($i = 0, 1, \ldots, n$). By involving the matrix notation, the last system of equations can be given in the matrix form

(7.3.3)
$$X^TP\,(f - Xa) = 0,$$

i.e.

(7.3.4)
$$X^TPX\,a = X^TPf.$$

Note that the normal system of equations (7.3.3), i.e. (7.3.4), is obtained from the overdefined system of equations (7.1.1), given in the matrix form as

$$Xa = f,$$

by simple multiplication by the matrix $X^TP$ from the left side.
The diagonal matrix P, which is called the weight matrix, has the meaning that larger weights $p_j = p(x_j)$ are assigned to the values $f_j$ of the function known with greater accuracy. This is of importance when approximating experimental data obtained from measurements of different accuracy. For example, for measurements realized with different dispersions whose ratios are known, the weights $p_j$ are chosen as inverses of the dispersions, i.e. such that

$$p_0 : p_1 : \cdots : p_m = \frac{1}{\sigma_0^2} : \frac{1}{\sigma_1^2} : \cdots : \frac{1}{\sigma_m^2}.$$

When the measurements are realized with the same accuracy, but with different numbers of measurements, i.e. for every value of the argument $x_j$ there are $m_j$ measurements, and for $f_j$ the arithmetic means of the results obtained in the series of measurements are taken, then the numbers of measurements in the series are taken as weights, i.e. $p_j = m_j$ ($j = 0, 1, \ldots, m$). Nevertheless, usually the weights are equal, i.e. P is the unit matrix of order m + 1. In this case, (7.3.4) reduces to

(7.3.5)
$$X^TX\,a = X^Tf.$$
In the case when the system of basic functions is chosen so that $\Phi_i(x) = x^i$ ($i = 0, 1, \ldots, n$), we have

$$X = \begin{bmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n\\ 1 & x_1 & x_1^2 & \cdots & x_1^n\\ \vdots & & & & \vdots\\ 1 & x_m & x_m^2 & \cdots & x_m^n \end{bmatrix},$$

and the normal system (7.3.5) can be solved in closed form for small n; for n = 1 its determinant is $D = s_{11}s_{22} - s_{12}^2$, where $s_{kl}$ are the elements of the matrix $X^TX$.
Example 7.3.1. Find the parameters $a_0$ and $a_1$ in the approximation function $\Phi(x) = a_0 + a_1x$ using the least-squares method, for a function given in tabular form as the set of value pairs

$x_j$: 1.1, 1.9, 4.2, 6.1
$f_j$: 2.5, 3.2, 4.5, 6.0

For the weight matrix P we can take the unit matrix. The previously given formulas can be directly applied, but we can also start from the overdefined system of equations

$$\begin{bmatrix} 1 & 1.1\\ 1 & 1.9\\ 1 & 4.2\\ 1 & 6.1 \end{bmatrix}\begin{bmatrix} a_0\\ a_1 \end{bmatrix} = \begin{bmatrix} 2.5\\ 3.2\\ 4.5\\ 6.0 \end{bmatrix}.$$

Multiplication from the left by $X^T$ gives the normal system

$$\begin{bmatrix} 4 & 13.3\\ 13.3 & 59.67 \end{bmatrix}\begin{bmatrix} a_0\\ a_1 \end{bmatrix} = \begin{bmatrix} 16.2\\ 64.33 \end{bmatrix},$$

wherefrom it follows

$$\begin{bmatrix} a_0\\ a_1 \end{bmatrix} = \frac{1}{61.79}\begin{bmatrix} 59.67 & -13.3\\ -13.3 & 4 \end{bmatrix}\begin{bmatrix} 16.2\\ 64.33 \end{bmatrix} \approx \begin{bmatrix} 1.7974591\\ 0.6774559 \end{bmatrix}.$$
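For a straight line the normal system (7.3.5) involves only the sums $\sum x_j$, $\sum x_j^2$, $\sum f_j$ and $\sum x_jf_j$. A Python sketch reproducing the example (the function name lsq_line is ours):

```python
def lsq_line(xs, ys):
    """Least-squares straight line a0 + a1*x from the 2x2 normal equations."""
    m = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = m * sxx - sx * sx
    a0 = (sxx * sy - sx * sxy) / det
    a1 = (m * sxy - sx * sy) / det
    return a0, a1

a0, a1 = lsq_line([1.1, 1.9, 4.2, 6.1], [2.5, 3.2, 4.5, 6.0])
print(round(a0, 7), round(a1, 7))   # prints: 1.7974591 0.6774559
```

For larger n the explicit formulas become unstable and an orthogonalisation-based solver is preferable, but for a line the normal equations are perfectly adequate.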
In many areas of science and technology, when dealing with experimental data, we often have the problem of parameter determination in so-called empirical formulas which express a functional relation between two or more variables. For example, let the functional relation be given as

$$y = f(x; a_0, a_1, \ldots, a_n),$$

where $a_i$ ($i = 0, 1, \ldots, n$) are parameters which are to be determined using the following tabulated data obtained by measurement:

$i$: 0, 1, ..., m
$x_i$: $x_0$, $x_1$, ..., $x_m$
$y_i$: $y_0$, $y_1$, ..., $y_m$

The measured data contain accidental errors of measurement, i.e. "noise" in the experiment. Determination of the parameters $a_i$ ($i = 0, 1, \ldots, n$) is, from the point of view of the theory of approximation, possible only if $m \ge n$. In the case m = n we have interpolation, which is in the general case nonlinear, depending on the shape of the function f. In order to eliminate the "noise" in the data and obtain greater accuracy and reliability, the number of measurements should be large enough. Then the most used method for determination of the parameters is the least-squares method, i.e. minimization of the quantity F defined by

$$F = \sum_{j=0}^{m}\bigl(y_j - f(x_j; a_0, a_1, \ldots, a_n)\bigr)^2,$$

or, when the weights $p_j$ are included,

$$F = \sum_{j=0}^{m} p_j\bigl(y_j - f(x_j; a_0, a_1, \ldots, a_n)\bigr)^2.$$

If a functional relation between several variables is given as

$$z = f(x, y; a_0, a_1, \ldots, a_n),$$

the procedure is analogous.
If f is a linear approximation function (in the parameters $a_0, a_1, \ldots, a_n$), i.e. of the form (7.3.1), the problem is to be solved in the previously explained way. Nevertheless, if f is a nonlinear approximation function, then the corresponding normal system of equations

(7.3.7)
$$\frac{\partial F}{\partial a_i} = 0 \quad (i = 0, 1, \ldots, n)$$

is nonlinear. For solving this system some method for the solution of systems of nonlinear equations can be used, like the Newton-Kantorovich method, whereby this procedure is rather complicated. In order to solve the problem in an easier way, there are simplified methods of transformation of such problems to the linear approximation method, namely, by introducing substitutions like

(7.3.8)
$$X = g(x), \qquad Y = h(y).$$
For example, let $y = f(x; a_0, a_1) = a_0e^{a_1x}$. Then, by taking logarithms and substituting

$$X = x, \quad Y = \log y, \quad b_0 = \log a_0, \quad b_1 = a_1,$$

the problem reduces to a linear one, because now $Y = b_0 + b_1X$. Thus, by minimization of

(7.3.9)
$$G = G(b_0, b_1) = \sum_{j=0}^{m}(Y_j - b_0 - b_1X_j)^2,$$

where $X_j = x_j$ and $Y_j = \log y_j$ ($j = 0, 1, \ldots, m$), we determine the parameters $b_0$ and $b_1$, and then

$$a_0 = e^{b_0} \quad\text{and}\quad a_1 = b_1.$$
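The log-linearization just described is a few lines of code. A Python sketch (the function name is ours), combining the substitution with an ordinary least-squares line:

```python
import math

def fit_exponential(xs, ys):
    """Fit y ~ a0*exp(a1*x) by the substitution above: a straight line
    Y = b0 + b1*X is fitted to X = x, Y = log y by least squares, then
    a0 = exp(b0) and a1 = b1.  Note that this minimises the error in
    log y, not in y itself."""
    Ys = [math.log(y) for y in ys]
    m = len(xs)
    sx = sum(xs)
    sxx = sum(v * v for v in xs)
    sY = sum(Ys)
    sxY = sum(v * w for v, w in zip(xs, Ys))
    det = m * sxx - sx * sx
    b0 = (sxx * sY - sx * sxY) / det
    b1 = (m * sxY - sx * sY) / det
    return math.exp(b0), b1

# noise-free data from y = 2*exp(0.5*x) is recovered exactly
a0, a1 = fit_exponential([0.0, 1.0, 2.0, 3.0],
                         [2.0 * math.exp(0.5 * t) for t in (0.0, 1.0, 2.0, 3.0)])
```

On noisy data the recovered parameters will differ from those minimizing the untransformed sum of squares, as the next paragraph explains.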
Nevertheless, this procedure does not give the same result as minimization of the function

$$F = \sum_{j=0}^{m}\bigl(y_j - a_0e^{a_1x_j}\bigr)^2.$$

Moreover, the obtained results can deviate significantly, because the problem we are solving is different from the stated one, having in mind the transformation we have done ($Y = \log y$). But for many practical engineering problems the parameters obtained in this way are satisfactory.
We will specify some typical functional dependencies with possible transformations of variables:

1° $y = a_0x^{a_1}$: $\ X = \log x$, $Y = \log y$, $b_0 = \log a_0$, $b_1 = a_1$;
2° $y = a_0a_1^x$: $\ X = x$, $Y = \log y$, $b_0 = \log a_0$, $b_1 = \log a_1$;
3° $y = a_0 + \dfrac{a_1}{x}$: $\ X = \dfrac1x$, $Y = y$, $b_0 = a_0$, $b_1 = a_1$;
4° $y = a_0 + \dfrac{a_1}{x}$: $\ X = x$, $Y = xy$, $b_0 = a_1$, $b_1 = a_0$;
5° $y = \dfrac{1}{a_0 + a_1x}$: $\ X = x$, $Y = \dfrac1y$, $b_0 = a_0$, $b_1 = a_1$.
Example 7.3.2. Results of measurements of the values x and y are given in the following tabular form:

$i$: 0, 1, 2, 3, 4, 5
$x_i$: 4.48, 4.98, 5.60, 6.11, 6.62, 7.42
$y_i$: 4.15, 1.95, 1.31, 1.03, 0.74, 0.63

If $y = \dfrac{1}{a_0 + a_1x}$, reduce the problem to a linear one and approximate using the least-squares method.

By involving X = x, Y = 1/y and using the least-squares method we get the approximation function $\Phi(X) \approx 0.468X - 1.843$, wherefrom it follows $y \approx \dfrac{1}{0.468x - 1.843}$.
From the previous one can conclude that, depending on f, convenient replacements (7.3.8) should be chosen so that they enable reducing the problem to the linear form

(7.3.10)
$$Y = b_0 + b_1X.$$

It is clear that the functions g and h must have their inverse functions, so that (7.3.10) is, in fact, equivalent to

$$h^{-1}(Y) = f\bigl(g^{-1}(X); a_0, a_1, \ldots, a_n\bigr),$$

whereby the parameters $b_i$ depend on the parameters $a_i$ in a rather simple way.
[6] Press, W.H., Flannery, B.P., Teukolsky, S.A., and Vetterling, W.T., Numerical Recipes - The Art of Scientific Computing. Cambridge University Press, 1989.
[7] Milovanovic, G.V. and Wrigge, S., Least squares approximation with constraints. Math. Comp. 46(1986), 551-565.
[8] Milovanovic, G.V. and Kovacevic, M.A., Zbirka resenih zadataka iz numericke analize. Naucna knjiga, Beograd, 1985. (Serbian).
[9] Dordevic, D.R. and Ilic, Z.A., Realization of Stieltjes procedure and Gauss-Christoffel quadrature formulas in system Mathematica. Zb. rad. Gradj. fak. Nis 17(1996), pp. 29-40.
[10] Todd, J., Introduction to the Constructive Theory of Functions. New York, 1963.
[11] Schoenberg, I.J., Contributions to the problem of approximation of equidistant data by analytic functions. Quart. Appl. Math. 4(1946), 45-99; 112-141.
[12] Ralston, A., A First Course in Numerical Analysis. McGraw-Hill, New York, 1965.
[13] Brass, H., Approximation durch Teilsummen von Orthogonalpolynomreihen. In: Numerical Methods of Approximation Theory (L. Collatz, G. Meinardus, H. Werner, eds.), 69-83, ISNM 52, Birkhauser, Basel, 1980.
[14] Brass, H., Error estimates for least squares approximation by polynomials. J. Approx. Theory 41(1984), 345-349.
[15] Collatz, L. and Krabs, W., Approximationstheorie. Tschebyscheffsche Approximation mit Anwendungen. B.G. Teubner, Stuttgart, 1973.
[16] Davis, P.J., Interpolation and Approximation. Blaisdell Publ. Comp., New York, 1963.
[17] Feinerman, R.P. and Newman, D.J., Polynomial Approximation. Williams & Wilkins, Baltimore, 1974.
[18] Fox, L. and Parker, I.B., Chebyshev Polynomials in Numerical Analysis. Oxford Univ. Press, 1972.
[19] Hildebrand, F.B., Introduction to Numerical Analysis. McGraw-Hill, New York, 1974.
[20] Rivlin, T.J., An Introduction to the Approximation of Functions. Dover Publications, Inc., New York, 1981.
[21] IMSL Math/Library Users Manual, IMSL Inc., 2500 City West Boulevard, Houston TX 77042.
[22] NAG Fortran Library, Numerical Algorithms Group, 256 Banbury Road, Oxford OX27DE, U.K., Chapter F02.
Faculty of Civil Engineering, Belgrade - Faculty of Civil Engineering and Architecture, Nis
Master Study - Doctoral Study
COMPUTATIONAL ENGINEERING
LECTURES
LESSON VIII
8. Numerical Differentiation and Integration

In this lesson, numerical differentiation and numerical integration of functions will be considered.

8.1.1. Introduction
The need for numerical differentiation appears in the following cases:

a. When values of the function are known only on a discrete set of points on [a, b], i.e. the function f is given in tabular form;
b. When the analytical expression for the function f is rather complicated.

Numerical differentiation is chiefly based on approximation of the function f by a function $\Phi$ on [a, b], and then differentiating $\Phi$ the desired number of times. Thus, based on $f(x) \approx \Phi(x)$ ($a \le x \le b$), we have $f^{(k)}(x) \approx \Phi^{(k)}(x)$.

For the function $\Phi$, algebraic interpolation polynomials are mostly taken, because they are simple to differentiate. Let $\Phi$ be the interpolation polynomial of n-th degree, i.e. $\Phi(x) = P_n(x)$.
Construct over the set $\{x_i, x_{i+1}, \ldots, x_{i+n}\}$ ($0 \le i \le m - n$) the first Newton interpolation polynomial (see Chapter 6)

(8.1.2.1)
$$P_n(x) = f_i + p\,\Delta f_i + \frac{p(p-1)}{2!}\,\Delta^2 f_i + \frac{p(p-1)(p-2)}{3!}\,\Delta^3 f_i + \cdots + \frac{p(p-1)\cdots(p-n+1)}{n!}\,\Delta^n f_i,$$

where $p = (x - x_i)/h$. Because $P_n'(x) = \dfrac1h\,\dfrac{dP_n(x)}{dp}$, by differentiation of (8.1.2.1) we get

(8.1.2.2)
$$P_n'(x) = \frac1h\Bigl(\Delta f_i + \frac{2p - 1}{2}\,\Delta^2 f_i + \frac{3p^2 - 6p + 2}{6}\,\Delta^3 f_i + \cdots\Bigr).$$
By further differentiation of (8.1.2.2) we get, in turn, $P_n''$, $P_n'''$, and so on. For example,
$$f'(x_i) = \frac1h(f_i - f_{i-1}) + \frac h2\,f''(\eta_1) \quad (x_{i-1} < \eta_1 < x_i),$$

$$f'(x_i) = \frac{1}{2h}(3f_i - 4f_{i-1} + f_{i-2}) + \frac{h^2}{3}\,f'''(\eta_2) \quad (x_{i-2} < \eta_2 < x_i),$$

$$f'(x_i) = \frac{1}{6h}(11f_i - 18f_{i-1} + 9f_{i-2} - 2f_{i-3}) + \frac{h^3}{4}\,f^{IV}(\eta_3) \quad (x_{i-3} < \eta_3 < x_i).$$
The previous formulas for the first derivative in the node $x_i$ are obviously asymmetric and are usually applied at the boundaries of the interval [a, b]. A typical application of these formulas is the approximation of derivative boundary conditions in contour problems of differential equations.

For nodes inside the segment [a, b] it is better to use symmetric differentiation formulas, for example

$$f'(x_i) = \frac{1}{2h}(f_{i+1} - f_{i-1}) - \frac{h^2}{6}\,f'''(\xi) \quad (x_{i-1} < \xi < x_{i+1}).$$
The most used and simplest formula for approximation of the second derivative is

$$D^2f_i = \frac{1}{h^2}\bigl(f_{i+1} - 2f_i + f_{i-1}\bigr) + T(f),$$

where the truncation error is $T(f) = -\dfrac{h^2}{12}f^{IV}(\xi)$, $x_{i-1} < \xi < x_{i+1}$.
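The differentiation formulas above are one-liners to implement. A Python sketch (function names are ours):

```python
def d1_central(f, x, h):
    """Symmetric first-derivative formula, error -(h^2/6) f'''(xi)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d1_backward3(f, x, h):
    """Asymmetric three-point formula (3f_i - 4f_{i-1} + f_{i-2})/(2h),
    for use at the right boundary of [a, b]."""
    return (3.0 * f(x) - 4.0 * f(x - h) + f(x - 2.0 * h)) / (2.0 * h)

def d2_central(f, x, h):
    """Simplest second-derivative formula (f_{i+1} - 2f_i + f_{i-1})/h^2."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)
```

As the error terms show, the central second difference is exact for cubic f (its error involves $f^{IV}$), and the three-point backward formula is exact for quadratic f, which provides easy sanity checks.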
8.2. Numerical integration

8.2.1. Introduction
Numerical integration of functions deals with the approximate calculation of definite integrals on the basis of sets of values of the function to be integrated, by following some formula.

Formulas for calculation of single integrals are called quadrature formulas. In a similar way, formulas for double integrals (and multi-dimensional integrals, too) are called cubature formulas.

In our considerations, we will deal mainly with quadrature formulas.
The need for numerical integration appears in many cases. Namely, the Newton-Leibniz formula

(8.2.1.1)
$$\int_a^b f(x)\,dx = F(b) - F(a),$$

where F is a primitive function of the function f, cannot always be applied. Note some of these cases:

1. The function F cannot be represented by a finite number of elementary functions (for example, when $f(x) = e^{-x^2}$);
2. The application of formula (8.2.1.1) is often not rational, even when F is elementary; for example,

$$\int_0^a \frac{dx}{1 + x^3} = \frac13\log|a + 1| - \frac16\log(a^2 - a + 1) + \frac{1}{\sqrt3}\,\mathrm{arctg}\,\frac{2a - 1}{\sqrt3} + \frac{\pi}{6\sqrt3};$$

3. Integration of functions whose values are known on a discrete set of points (obtained, for example, by experiments) is not possible by applying formula (8.2.1.1).
A large number of quadrature formulas are of the form

(8.2.1.2)
$$\int_a^b f(x)\,dx \cong \sum_{k=0}^{n} A_kf_k,$$

where $f_k = f(x_k)$ ($a \le x_0 < \cdots < x_n \le b$). If $x_0 = a$ and $x_n = b$, formula (8.2.1.2) is of closed kind, and in other cases of open kind.

For integration of differentiable functions, formulas are also used which contain, in addition to the values of the function, the values of its derivatives. Formulas for calculation of integrals of the form

$$\int_a^b p(x)f(x)\,dx,$$

where p is a given weight function, are considered below.
On the basis of these data, we can construct the Lagrange interpolation polynomial, i.e.

(8.2.1.3)
$$\int_a^b p(x)f(x)\,dx = \sum_{k=0}^{n} A_kf_k + R_{n+1}(f),$$

where we put

$$A_k = \int_a^b \frac{p(x)\,\omega(x)}{(x - x_k)\,\omega'(x_k)}\,dx \quad (k = 0, 1, \ldots, n).$$

In formula (8.2.1.3), $R_{n+1}(f)$ is called the remainder term (residue) of the quadrature formula and represents the error made by replacing the integral with the finite sum. The index n + 1 in the remainder term denotes that the integral is approximately calculated based on the values of the integrated function in n + 1 points.
Denote by $\pi_n$ the set of all polynomials of degree not greater than n. Because $f(x) = x^k$ ($k = 0, 1, \ldots, n$) implies $f(x) = P_n(x)$, we have $R_{n+1}(x^k) = 0$ ($k = 0, 1, \ldots, n$), wherefrom we conclude that formula (8.2.1.3) is exact for every $f \in \pi_n$, regardless of the choice of interpolation nodes $x_k$ ($k = 0, 1, \ldots, n$), and in this case we say that (8.2.1.3) has algebraic degree of accuracy n.
(8.2.2.1)
$$\omega(x) = (x - x_0)(x - x_1)\cdots(x - x_n) = h^{n+1}\,p(p - 1)\cdots(p - n)$$

and

(8.2.2.2)
$$\omega'(x_k) = (x_k - x_0)\cdots(x_k - x_{k-1})(x_k - x_{k+1})\cdots(x_k - x_n) = (-1)^{n-k}h^n\,k!\,(n - k)!.$$

By introducing the notation for the generalized degree $x^{(s)} = x(x - 1)\cdots(x - s + 1)$, based on (8.2.2.1), (8.2.2.2) and the results from the previous section, we get

$$A_k = \frac{(-1)^{n-k}h}{k!\,(n - k)!}\int_0^n \frac{p^{(n+1)}}{p - k}\,dp \quad (k = 0, 1, \ldots, n),$$

i.e.

$$A_k = (b - a)H_k \quad (k = 0, 1, \ldots, n),$$
where we put

(8.2.2.3)
$$H_k = H_k^{(n)} = \frac{(-1)^{n-k}}{n\,k!\,(n - k)!}\int_0^n \frac{p^{(n+1)}}{p - k}\,dp \quad (k = 0, 1, \ldots, n).$$
The formulas

(8.2.2.4)
$$\int_{x_0=a}^{x_n=b} f(x)\,dx \cong (b - a)\sum_{k=0}^{n} H_kf\Bigl(a + k\,\frac{b - a}{n}\Bigr) \quad (n \in N)$$

are known as Newton-Cotes formulas.

Further on we give a survey of the Newton-Cotes formulas for $n \le 8$. Here we use the notation $h = \dfrac{b - a}{n}$, $f_k = f(x_k)$ ($k = 0, 1, \ldots, n$).
1. n = 1 (trapezoid rule):
$$\int_{x_0}^{x_1} f(x)\,dx = \frac h2(f_0 + f_1) - \frac{h^3}{12}f''(\xi_1);$$

2. n = 2 (Simpson's rule):
$$\int_{x_0}^{x_2} f(x)\,dx = \frac h3(f_0 + 4f_1 + f_2) - \frac{h^5}{90}f^{IV}(\xi_2);$$

3. n = 3 (Simpson's three-eighths rule):
$$\int_{x_0}^{x_3} f(x)\,dx = \frac{3h}{8}(f_0 + 3f_1 + 3f_2 + f_3) - \frac{3h^5}{80}f^{IV}(\xi_3);$$

4. n = 4 (Boole's rule):
$$\int_{x_0}^{x_4} f(x)\,dx = \frac{2h}{45}(7f_0 + 32f_1 + 12f_2 + 32f_3 + 7f_4) - \frac{8h^7}{945}f^{VI}(\xi_4);$$

5. n = 5:
$$\int_{x_0}^{x_5} f(x)\,dx = \frac{5h}{288}(19f_0 + 75f_1 + 50f_2 + 50f_3 + 75f_4 + 19f_5) - \frac{275h^7}{12096}f^{VI}(\xi_5);$$

6. n = 6:
$$\int_{x_0}^{x_6} f(x)\,dx = \frac{h}{140}(41f_0 + 216f_1 + 27f_2 + 272f_3 + 27f_4 + 216f_5 + 41f_6) - \frac{9h^9}{1400}f^{VIII}(\xi_6);$$

7. n = 7:
$$\int_{x_0}^{x_7} f(x)\,dx = \frac{7h}{17280}(751f_0 + 3577f_1 + 1323f_2 + 2989f_3 + 2989f_4 + 1323f_5 + 3577f_6 + 751f_7) - \frac{8183h^9}{518400}f^{VIII}(\xi_7);$$

8. n = 8:
$$\int_{x_0}^{x_8} f(x)\,dx = \frac{4h}{14175}(989f_0 + 5888f_1 - 928f_2 + 10496f_3 - 4540f_4 + 10496f_5 - 928f_6 + 5888f_7 + 989f_8) - \frac{2368h^{11}}{467775}f^{X}(\xi_8).$$

In all formulas $x_0 < \xi_n < x_n$.
Figure 8.2.3.1 (the graph of y = f(x), with [a, b] partitioned into subsegments for the composite trapezoid rule)
By applying the trapezoid formula on every subsegment, we get

$$\int_a^b f(x)\,dx = T_n - \frac{h^3}{12}\sum_{i=1}^{n} f''(\xi_i),$$

where

$$T_n = T_n(f; h) = h\Bigl(\frac12 f_0 + f_1 + \cdots + f_{n-1} + \frac12 f_n\Bigr)$$

and $x_{i-1} < \xi_i < x_i$ ($i = 1, 2, \ldots, n$). Namely, for $f \in C^2[a, b]$ the equality

$$\int_a^b f(x)\,dx - T_n = -\frac{(b - a)h^2}{12}\,f''(\xi) \quad (a < \xi < b)$$

holds. The quadrature formula

$$\int_a^b f(x)\,dx \cong T_n(f; h) \quad \Bigl(h = \frac{b - a}{n}\Bigr)$$

is called the generalized (composite) trapezoid formula.
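In code, $T_n(f; h)$ is a single loop. A Python sketch (the function name trapezoid is ours):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule T_n(f; h) with h = (b - a)/n."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s
```

Consistent with the $h^2$ error term above, doubling n reduces the error roughly four times; the rule is exact for linear integrands.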
In a similar way, by applying Simpson's rule on the subsegments $[x_{2i-2}, x_{2i}]$ ($i = 1, \ldots, n$; $x_0 = a$, $x_{2n} = b$; see Fig. 8.2.3.2), the generalized Simpson formula

$$S_n = S_n(f; h) = \frac h3\bigl(f_0 + 4f_1 + 2f_2 + 4f_3 + \cdots + 2f_{2n-2} + 4f_{2n-1} + f_{2n}\bigr)$$

is obtained, and for $f \in C^4[a, b]$ the equality

$$\int_a^b f(x)\,dx - S_n = -\frac{(b - a)h^4}{180}\,f^{IV}(\xi) \quad (a < \xi < b)$$

holds.
8.2.4. Romberg integration

Denote by $T_k^{(0)}$ the trapezoid approximations of the integral I obtained with the steps $h_k = (b - a)/2^k$. Romberg integration builds the T-table

$$\begin{matrix} T_0^{(0)} & T_0^{(1)} & T_0^{(2)} & \cdots\\ T_1^{(0)} & T_1^{(1)} & \cdots\\ T_2^{(0)} & \cdots\\ \vdots \end{matrix}$$

by means of the recurrence

(8.2.4.1)
$$T_k^{(m)} = \frac{4^m\,T_{k+1}^{(m-1)} - T_k^{(m-1)}}{4^m - 1},$$

taking k = 0, 1, ... and m = 1, 2, .... In the first column of this table are, in turn, the approximate values of the integral I obtained by means of the trapezoid formula with $h_k = (b - a)/2^k$ (k = 0, 1, ...). The second column is obtained from the first using formula (8.2.4.1), the third from the second, and so on.

The iterative process defined by (8.2.4.1) is the standard Romberg method for numerical integration. One can prove that the sequences $\{T_k^{(m)}\}_{k \in N_0}$ and $\{T_k^{(m)}\}_{m \in N_0}$ (by columns and rows in the T-table) converge to I. In practical application of Romberg integration, the iterative process (8.2.4.1) is usually interrupted when $|T_0^{(m)} - T_0^{(m-1)}| \le \varepsilon$, where $\varepsilon$ is the allowed error.
Program 8.2.5.1. For integration using the generalized Simpson formula the subroutine INTEG is written. The parameters in the parameter list have the meaning explained in the C-comments of the subprogram source code. The function to be integrated is given in the subroutine FUN and depends on one parameter Z. The integer parameter J provides simultaneous specification of several functions to integrate.
The program is tested, with accuracy $10^{-5}$, on the integrals

$$\int_0^1 \frac{e^{zx}}{x^2 + z^2}\,dx \quad (z = 1.0(0.1)1.5), \qquad \int_0^{1/2} \pi\sin(\pi xz)\,dx \quad (z = 1.0(0.1)1.5), \qquad \int_0^1 \frac{\log(x + z)}{z^2 + e^x}\,\frac{\sin x}{x}\,dx \quad (z = 0.0(0.1)0.5).$$
25 N=2*N
H=0.5*H
C NUMBER OF INTERVALS IS DOUBLED AND
C NEW VALUE FOR S1 IS COMPUTED
S1=S1+S2
SO=S
GO TO 10
30 KBR=1
35 RETURN
END
FUNCTION FUN(X,Z,J)
GO TO (10,20,30) ,J
10 FUN=EXP(Z*X)/(X*X+Z*Z)
RETURN
20 PI=3.1415926535
FUN=PI*SIN(PI*X*Z)
RETURN
30 FUN=ALOG(X+Z)/(Z*Z+EXP(X))*SIN (X)/X
RETURN
END
EXTERNAL FUN
OPEN(8,File='Simpson.IN')
OPEN(6,File='Simpson.out')
WRITE(6,5)
5 FORMAT (1H1,2X, 'IZRACUNAVANJE VREDNOSTI INTEGRALA',
1 ' PRIMENOM SIMPSONOVE FORMULE ' //14X,
2 'TACNOST IZRACUNAVANJA EPS=1.E-5'
3 ///11X, 'J' ,4X, 'DONJA' ,5X, 'GORNJA' ,3X, 'PARAMETAR',
4 3X,' VREDNOST'/ 16X, 'GRANICA', 3X,'GRANICA',
5 5X,'Z' ,7X,'INTEGRALA'//)
DO 40 J=1,3
READ (8,15) DG, GG, ZP, DZ, ZK
15 FORMAT(5F5.1)
Z=ZP-DZ
18 Z=Z+DZ
IF (Z.GT.ZK+0.000001) GO TO 40
CALL INTEG (DG,GG,S,FUN,J,KBR,Z)
IF(KBR) 20,25,20
20 WRITE (6,30)
30 FORMAT (/11X, 'INTEGRAL NIJE KOREKTNO IZRACUNAT'/)
GO TO 18
25 WRITE (6,35) J,DG,GG,Z,S
35 FORMAT (11X,I1,F8.1,2F10.1,F15.6/)
GO TO 18
40 CONTINUE
STOP
END
Program 8.2.5.2. Using the subroutine ROMBI, the values of the integral $\int_0^x e^{-t^2}\,dt$ are computed for x = 0.1(0.1)1.0 with the prescribed accuracy. The routine codes and output listings are of the form:
C=================================================================
C ROMBERG INTEGRATION (ROMBERGOVA INTEGRACIJA)
C=================================================================
DOUBLE PRECISION GG, FUN, VINT
EXTERNAL FUN
open(6,file='romberg.out')
EPS=1.E-8
WRITE (6,11)
11 FORMAT(1H0,5X, 'X', 7X, 'INTEGRAL(O. ,X)'/)
DO 10 I=1, 10
GG=0.1*I
CALL ROMBI(O.DO,GG,FUN,EPS,VINT,KB)
IF (KB) 5,15,5
5 WRITE (6,20) GG
20 FORMAT (5X,F3.1,4X,'TACNOST NE ZADOVOLJAVA'//)
GO TO 10
15 WRITE(6,25)GG,VINT
25 FORMAT(5X,F3.1,4X,F14.9)
10 CONTINUE
STOP
END
SUBROUTINE ROMBI (DG,GG,FUN,EPS,VINT,KB)
DOUBLE PRECISION FUN,VINT,T(15),DG,GG,H,A,POM,B,X
KB=0
H=GG-DG
132 Numerical Methods in Computational Engineering
A=(FUN(DG)+FUN(GG))/2.
POM=H*A
DO 50 K=1, 15
X=DG+H/2.
10 A=A+FUN(X)
X=X+H
IF (X.LT.GG) GO TO 10
T(K)=H/2.*A
B=1.
IF (K.EQ.1) GO TO 20
K1=K-1
DO 15 M=1,K1
I=K-M
B=4.*B
15 T(I)=(B*T(I+1)-T(I))/(B-1.)
20 B=4.*B
VINT=(B*T(1)-POM)/(B-1.)
IF(DABS(VINT-POM).LE.EPS) RETURN
POM=VINT
50 H=H/2.
KB=1
RETURN
END
FUNCTION FUN(X)
DOUBLE PRECISION FUN,X
FUN=DEXP(-X*X)
RETURN
END
X     INTEGRAL(0.,X)
.1 .099667666
.2 .197365034
.3 .291237887
.4 .379652845
.5 .461281012
.6 .535153533
.7 .600685674
.8 .657669863
.9 .706241521
1.0 .746824138
where the area of integration is the unit circle, i.e. $G = \{(x, y) \mid x^2 + y^2 \le 1\}$. Namely, for numerical computation of the integral (8.2.6.1) a cubature formula of the form

$$\iint_G f(x, y)\,dx\,dy \cong \sum_j C_jf(M_j)$$

is known in the literature, where O is the origin, i.e. O = (0, 0), and the points $M_j$ have prescribed polar coordinates. The third test example is

$$\iint_G \frac{24x^2}{\sqrt{2 - x^2 - y^2}}\,dx\,dy.$$
FUNCTION EF(X,Y,K)
GO TO (10,20,30),K
10 EF=(16.*X*X*Y*Y)/(1.+X*X+Y*Y)
RETURN
20 EF=SQRT(1.+Y*Y+(1.+X)**2)
RETURN
30 EF=(24.*X*X)/SQRT(2.-X*X-Y*Y)
RETURN
END
1
IZRACUNAVANJE DVOSTRUKOG INTEGRALA
1 PRIMER
VREDNOST INTEGRALA = 1.256637
2 PRIMER
VREDNOST INTEGRALA = 4.858376
3 PRIMER
VREDNOST INTEGRALA = 16.324200
Numerical integration of both discrete data and known functions is needed in engineering practice. The procedures for the first case are based on fitting approximating polynomials to the data and integrating the approximating polynomials. The direct fit polynomial method works well for both equally spaced and non-equally spaced data. Least-squares fit polynomials can be used for large sets of data or sets of rough data. The Newton-Cotes formulas, which are based on Newton forward-difference polynomials, give simple integration formulas for equally spaced data. Romberg integration, which is extrapolation of the trapezoid rule, is of important practical use. An example of multiple integration is presented as an illustrative case.

Of the presented simple methods it is likely that Romberg integration is the most efficient. Simpson's rules are elegant, but the first extrapolation of Romberg integration gives comparable results. Subsequent extrapolations of Romberg integration increase the order at a very satisfactory rate. Simpson's rules could be developed into an extrapolation procedure, but with no advantage over Romberg integration.

Many commercial software packages contain solvers for numerical integration. Some of the more prominent systems are Matlab and Mathcad. More sophisticated systems, such as Mathematica, Macsyma (VAX UNIX MACSYMA, Reference Manual, Symbolics Inc., Cambridge, MA), and Maple (MAPLE V Library Reference Manual, Springer, NY, 1991), also contain numerical integration solvers.

Some organizations have their own packages - collections of high-quality routines - like ACM (Collected Algorithms), IMSL (Houston, TX), and NAG (Numerical Algorithms Group, Downers Grove, IL); some famous individual packages are QUADPACK (R. Piessens et al., QUADPACK, A Subroutine Package for Automatic Integration, Springer, Berlin, 1983), CUBTRI (Cubature Formulae Over Triangles), and SSP (IBM Numerical Software).

The book Numerical Recipes ([4], Chap. 4) contains several subroutines for integration of functions. Some algorithms, of which some are coded, are given in the book Numerical Methods for Engineers and Scientists ([3], Chap. 6).

At the end, in order to give some hints for one's own software development or usage of software packages, we will give standard test examples for testing or benchmarking.
1° $\int \dfrac{\sin x}{x}\,dx$;  2° $\int \sqrt{\tan x}\,dx$;  3° $\int \dfrac{x}{x^3 - 1}\,dx$;  4° $\int \dfrac{dx}{\sin x}$;  5° $\int \dfrac{\log x}{\sqrt{x + 1}}\,dx$;  6° $\int \sqrt{\dfrac{1 + x}{1 - x}}\,dx$;  7° $\int e^{-ax^2}\,dx$;  8° $\int \dfrac{dx}{\log x}$;  9° $\int \dfrac{\sin x}{x^2}\,dx$;  10° $\int \dfrac{dx}{2 + \cos x}$;

and the definite integrals

1° $\int_0^{4\pi} \dfrac{dx}{2 + \cos x}$;  2° $\int_{-\infty}^{\infty} \dfrac{\sin x}{x}\,dx$;  3° $\int_0^{\infty} \dfrac{e^{-x}}{\sqrt{x}}\,dx$;  4° $\int_0^{\infty} \dfrac{1 - e^{-x^2}}{x^2}\,dx$;  5° $\int_0^{\infty} e^{-x^2}\log^2 x\,dx$;  6° $\int_{-1}^{1} \dfrac{dx}{x^2}$;  7° $\int_1^{\infty} e^{-x}x^{11/3}\,dx$.
LECTURES
LESSON IX

9. Ordinary Differential Equations
9.1. Introduction
Problems involving ordinary differential equations (ODEs) can always be reduced
to the set of first-order differential equations. For example the second order equation
(9.1.1)
dy
- - .'L
-Z ( ·)
(l:J;
(9.1.2)
rlz
- = T(:r.)- q(x)z(x),
(1,:J; .
where z is a new variable. This exemplifies the procedure for an arbitrary ODE. The usual choice for the new variables is to let them be just derivatives of each other, and, of course, of the original variable. Occasionally, it is useful to incorporate into their definition some other factors in the equation, or some powers of the independent variable, for the purpose of mitigating singular behavior that could result in overflows or increased roundoff error. Thus, the new variables should be chosen carefully. The possibility
of a formal reduction of a differential equation system to an equivalent set of first-order
equations means that computer programs for the solution of differential equation sets
can be directed toward the general form
(9.1.3)
$$\frac{dy_i(x)}{dx} = f_i(x, y_1, \ldots, y_n) \quad (i = 1, \ldots, n),$$

where the functions $f_i$ are known and $y_1, y_2, \ldots, y_n$ are the dependent variables.
A problem involving ODEs is not completely specified by its equations. Even more crucial in determining how to start solving the problem numerically is the nature of the problem's boundary conditions. Boundary conditions are algebraic conditions on the values of the functions $y_i$ in (9.1.3). Generally, they can be satisfied at discrete specified points, but do not hold between those points, i.e. are not preserved automatically by the differential equations. Boundary conditions can be as simple as requiring that certain variables have certain numerical values, or as complicated as a set of nonlinear algebraic equations among the variables. Usually, it is the nature of the boundary conditions that determines which numerical methods will be applied. Boundary conditions divide into two broad categories.
• Initial value problems, where all the y_i are given at some starting value x_s, and
it is desired to find the y_i's at some final point x_f, or at some discrete list of
points (for example, to generate a table of results).
• Two-point boundary value problems, where the boundary conditions are spec-
ified at more than one x. Usually some conditions are specified at x_s and the
remainder at x_f.
In considering methods for the numerical solution of the Cauchy problem for differential
equations of first order, we will note two general classes of methods:
a) linear multi-step methods,
b) Runge-Kutta methods.
The first class of methods has the property of linearity, in contrast to Runge-Kutta
methods, where a higher order of the method is achieved by introducing nonlinearity.
The common "predecessor" of both classes is Euler's method, which belongs to both
classes.
More recently a whole series of so-called hybrid methods has appeared, which
combine good characteristics of the two basic classes of methods.
The last formula defines Euler's method, whose geometric interpretation is given in
Fig. 9.2.1.
Figure 9.2.1
The polygonal line (x_0, y_0) - (x_1, y_1) - (x_2, y_2) - ... is known as Euler's polygon.
The points x_n are usually chosen equidistantly, i.e. x_{n+1} - x_n = h = const (> 0)
(n = 0, 1, ...), in which case (9.2.3) reduces to

    y_{n+1} = y_n + h f(x_n, y_n)    (n = 0, 1, ...).
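As a quick illustration (a Python sketch, not one of the Fortran programs of this lesson), Euler's method applied to the test problem y' = x² + y, y(1) = 1 that is used throughout this section:

```python
# Euler's method: y_{n+1} = y_n + h f(x_n, y_n) on an equidistant grid.

def euler(f, x0, y0, h, n):
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

# Test problem from this lesson: y' = x^2 + y, y(1) = 1, h = 0.1
xs, ys = euler(lambda x, y: x * x + y, 1.0, 1.0, 0.1, 10)
print(round(ys[1], 5), round(ys[2], 5))  # -> 1.2 1.441
```

The values 1.20000 and 1.44100 agree with the YN(I) column of the output listing given later in this section.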
In this and the following sections a general method for solving the Cauchy problem is analyzed, based on

(9.3.2)    y_{n+1} = y_n + h f_n    (n = 0, 1, ...),

in which a linear relation among y_n, y_{n+1} and f_n exists. In the general case, for evaluation
of the series more complicated recurrence relations than (9.3.2) can be used. Among the
methods originating from such relations, an important role is played by the methods with a
linear relation between y_{n+i}, f_{n+i} (i = 0, 1, ..., k); they form the class of linear multi-step
methods.
The general linear multi-step method can be represented in the form

(9.3.3)    Σ_{i=0}^{k} α_i y_{n+i} = h Σ_{i=0}^{k} β_i f_{n+i}    (n = 0, 1, ...),
At each step, when the method is implicit (β_k ≠ 0), the equation

(9.3.4)    α_k y_{n+k} - h β_k f(x_{n+k}, y_{n+k}) = Σ_{i=0}^{k-1} (h β_i f_{n+i} - α_i y_{n+i})

shall be solved. When (x, y) → f(x, y) is a nonlinear function which satisfies a Lipschitz
condition in y with constant L, the equation (9.3.4) can be solved by the iterative process

(9.3.5)    α_k y^{[s+1]}_{n+k} = h β_k f(x_{n+k}, y^{[s]}_{n+k}) + Σ_{i=0}^{k-1} (h β_i f_{n+i} - α_i y_{n+i})    (s = 0, 1, ...),

provided that h L |β_k / α_k| < 1. The condition given by this inequality ensures the convergence of the iterative process (9.3.5).
Let us for the method (9.3.3) define the difference operator L_h : C¹[x_0, b] → C[x_0, b] by

(9.3.6)    L_h[y] = Σ_{i=0}^{k} [α_i y(x + ih) - h β_i y'(x + ih)].
(9.3.7)
Let x → y(x) be the exact solution of problem (9.3.1) and y_n the series of approximate
values of this solution at the points x_n = x_0 + nh (n = 0, 1, ..., N) obtained by the method
(9.3.3), with initial values y_i = s_i(h) (i = 0, 1, ..., k - 1).
Definition 9.3.2. The linear multi-step method (9.3.3) is said to be convergent if for
every x ∈ [x_0, b]

    lim_{h→0, x-x_0=nh} y_n = y(x).
The linear multi-step method (9.3.3) can be characterized by the first and second charac-
teristic polynomials, given by

    ρ(ξ) = Σ_{i=0}^{k} α_i ξ^i    and    σ(ξ) = Σ_{i=0}^{k} β_i ξ^i,

respectively.
Two important classes of convergent multi-step methods which are met in practice are:
1. methods for which ρ(ξ) = ξ^k - ξ^{k-1};
2. methods for which ρ(ξ) = ξ^k - ξ^{k-2}.
Explicit methods of the first class are called Adams-Bashforth methods, and the im-
plicit ones Adams-Moulton methods. Similarly, explicit methods of the second class are
called Nyström's methods and the corresponding implicit methods Milne-Simpson's.
Of course, there are methods that belong to neither of these classes.
The needed initial values can be obtained, for example, by Taylor expansion,

    s_1(h) = y(x_0) + h y'(x_0) + (h²/2!) y''(x_0) + ... + (h^p/p!) y^{(p)}(x_0),

because s_1(h) - y(x_1) = O(h^{p+1}) (x_1 = x_0 + h). The same procedure can be applied
to the determination of the other initial values. In the general case, the iterative process (9.3.5)
is carried out until

    |y^{[s+1]}_{n+k} - y^{[s]}_{n+k}| ≤ ε,

where ε is a tolerable error, usually of the order of the local round-off error. Then
y^{[s+1]}_{n+k} can be taken for y_{n+k}.
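The stopping rule just described can be sketched as follows (an illustrative Python fragment, not one of the programs of this section; it assumes Euler's method as predictor and the trapezoidal rule as corrector for y' = x² + y):

```python
# Corrector iteration repeated until two successive iterates agree
# within a tolerance eps (of the order of the local round-off error);
# the last iterate y^{[s+1]} is then taken as the new solution value.

def pc_step(f, x, y, h, eps=1e-12, max_sweeps=50):
    fx = f(x, y)
    prev = y + h * fx                              # predictor (Euler)
    for s in range(max_sweeps):
        cur = y + h / 2.0 * (fx + f(x + h, prev))  # corrector sweep
        if abs(cur - prev) <= eps:
            return cur, s + 1
        prev = cur
    return prev, max_sweeps

y1, sweeps = pc_step(lambda x, y: x * x + y, 1.0, 1.0, 0.1)
print(round(y1, 5), sweeps)
```

Because h L β_k = 0.05 here, each sweep shrinks the error by a factor of 20, so only a handful of sweeps are needed.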
Nevertheless, this method is usually not applied in practice, since it demands a large
number of evaluations of the function f per calculation step and, in addition, this number
varies from step to step. In order to reduce the number of calculations, the number of
iterations in (9.3.5) is fixed. Thus, one takes only s = 0, 1, ..., m - 1.
In this section we will give program realizations of explicit as well as implicit meth-
ods. The presented programs are tested on the example (with h = 0.1)

    y' = x² + y,    y(1) = 1    (1 ≤ x ≤ 2).

For Euler's method

    y_{n+1} - y_n = h f_n    (n = 0, 1, ...),

the main program is of the form:
c
C==================================================
C RESAVANJE DIFERENCIJALNIH JEDNACINA
C EKSPLICITNIM METODIMA
C==================================================
EXTERNAL FUN
DIMENSION Y(100),Z(100)
F(X)=6.*EXP(X-1.)-X*X-2.*X-2.
OPEN(5,FILE='EULER.OUT')
WRITE (5,10)
10 FORMAT(3X,'RESAVANJE DIFERENCIJAL.JED. ',
1'EKSPLICITNIM METODIMA'//8X,'XN',8X,'YN(I)',
15X,'GRESKA(%)' ,3X,'YN(II)',4X,'GRESKA (%)'/)
XP=1.
XK=2.
H=0.1
Y(1)=1.
CALL EULER (XP,XK,H,Y,FUN)
Z(1)=Y(1)
Z(2)=1.221
Z(3)=1.48836
CALL ADAMS (XP,XK,H,Z,FUN)
N=(XK-XP+0.00001)/H
NN=N+1
X=XP
DO 22 I=1,NN
G1=ABS((Y(I)-F(X))/F(X))*100.
G2=ABS((Z(I)-F(X))/F(X))*100.
WRITE (5,20)X,Y(I),G1,Z(I),G2
22 X=X+H
20 FORMAT (8X,F3.1,2(4X,F9.5,4X,F5.2))
CLOSE(5)
STOP
END
RESAVANJE DIFERENCIJAL.JED.EKSPLICITNIM METODIMA
XN     YN(I)    GRESKA(%)   YN(II)   GRESKA(%)
1.0   1.00000     .00      1.00000     .00
1.1   1.20000    1.72      1.22100     .00
1.2   1.44100    3.19      1.48836     .00
1.3   1.72910    4.42      1.80883     .02
1.4   2.07101    5.47      2.19028     .03
1.5   2.47411    6.37      2.64126     .04
Using Euler's method as predictor and the trapezoidal rule

    y_{n+1} - y_n = (h/2)(f_n + f_{n+1})    (n = 0, 1, ...)

as corrector (with number of iterations m = 2), the subroutine PREKOR is written. The main
program, subprogram, and output results are of the form:
C===================================================
C RESAVANJE DIF.JED. METODOM PREDIKTOR-KOREKTOR
C===================================================
EXTERNAL FUN
DIMENSION Y(100)
F(X)=6.*EXP(X-1.)-X*X-2.*X-2.
OPEN(5,FILE='PREKOR.OUT')
OPEN(8,FILE='PREKOR.TXT')
WRITE(5,10)
10 FORMAT(8X, 'RESAVANJE DIF. JED. METODOM',
1' PREDIKTOR-KOREKTOR'//15X,'XN' ,13X, 'YN'
2,10X, 'GRESKA(%)'/)
READ(8,5)XP,XK,YP,H
5 FORMAT(4F6.1)
CALL PREKOR(XP,XK,YP,H,Y,FUN)
N=(XK-XP+0.00001)/H
NN=N+1
X=XP
DO 11 I=1,NN
G=ABS((Y(I)-F(X))/F(X))*100.
WRITE(5,15)X,Y(I),G
15 FORMAT(15X,F3.1,8X,F9.5,8X,F5.2)
11 X=X+H
STOP
END
c
c
SUBROUTINE PREKOR(XP,XK,YP,H,Y,FUN)
DIMENSION Y(100)
N=(XK-XP+0.00001)/H
X=XP
Y(1)=YP
DO 10 I=1,N
C PROGNOZIRANJE VREDNOSTI
FXY=FUN (X, Y(I))
YP=Y (I) +H*FXY
C KOREKCIJA VREDNOSTI
DO 20 M=1,2
20 YP=Y(I)+H/2.*(FXY+FUN(X+H,YP))
Y(I+1) =YP
10 X=X+H
RETURN
END
c
c
FUNCTION FUN(X,Y)
FUN=X*X+Y
RETURN
END
RESAVANJE DIF. JED. METODOM PREDIKTOR-KOREKTOR
XN YN GRESKA(%)
1.0 1.00000 .00
1.1 1.22152 .04
1.2 1.48952 .07
1.3 1.81097 .10
1.4 2.19363 .12
1.5 2.64602 .14
1.6 3.17760 .15
1.7 3.79881 .17
1.8 4.52118 .18
1.9 5.35747 .18
2.0 6.32177 .19
(9.7.1)    y_{n+1} = y_n + h Φ(x_n, y_n, h)    (n = 0, 1, ...).

Definition 9.7.1. The method (9.7.1) is of order p if p is the greatest integer for which

(9.7.2)    Φ(x, y, h) = Φ_T(x, y, h) = Σ_{i=0}^{p-1} [h^i / (i + 1)!] (∂/∂x + f ∂/∂y)^i f(x, y)

holds.
The general explicit Runge-Kutta method is of the form

(9.7.3)    y_{n+1} - y_n = h Φ(x_n, y_n, h),

where

    Φ(x, y, h) = Σ_{i=1}^{m} c_i k_i,
    k_1 = f(x, y),
    k_i = f(x + a_i h, y + b_i h)    (i = 2, ..., m),

(9.7.4)    a_i = Σ_{j=1}^{i-1} α_{ij},    b_i = Σ_{j=1}^{i-1} α_{ij} k_j,

and

    Σ_{i=1}^{m} c_i = 1.
The unknown coefficients which appear in this method are to be determined from the con-
dition that the method has maximal order. Here we develop Φ(x, y, h) in powers of h,
using the notation

    (∂/∂x + f ∂/∂y) f = f_x + f f_y = F

and

    (∂/∂x + f ∂/∂y)² f = (∂/∂x + f ∂/∂y) F = G + f_y F,

where we put G = f_xx + 2 f f_xy + f² f_yy. Then from (9.7.2) it follows

(9.7.5)    Φ_T(x, y, h) = f + (h/2!) F + (h²/3!)(G + f_y F) + O(h³).
Consider now only Runge-Kutta methods of order p ≤ 3. One can show that for obtaining
a method of third order it is enough to take m = 3. In this case, formulas (9.7.3) reduce
to a three-stage scheme. By developing the function k_2 in a Taylor series in a neighborhood
of the point (x, y), then developing k_3 in a neighborhood of (x, y) by using the resulting
equalities, and finally substituting the obtained expressions for k_1, k_2, k_3 into the
expression for Φ(x, y, h), we get
    Φ(x, y, h) = (c_1 + c_2 + c_3) f + (c_2 a_2 + c_3 a_3) F h
               + (c_2 a_2² G + 2 c_3 a_2 α_{32} F f_y + c_3 a_3² G) h²/2 + O(h³).
The last equality enables the construction of methods for m = 1, 2, 3.
Because of

    Φ(x, y, h) - Φ_T(x, y, h) = (c_1 + c_2 - 1) f + (c_2 a_2 - 1/2) F h
                              + (1/6)[(3 c_2 a_2² - 1) G - f_y F] h² + O(h³),
by taking

(9.7.6)    c_1 + c_2 = 1,    c_2 a_2 = 1/2,

one obtains a method of second order with one free parameter. Namely, from the system of
equations (9.7.6) it follows

    c_2 = 1/(2a_2)    and    c_1 = (2a_2 - 1)/(2a_2).
Similarly, we conclude that for obtaining methods of third order the sufficient conditions are

(9.7.7)    c_1 + c_2 + c_3 = 1,
           c_2 a_2 + c_3 a_3 = 1/2,
           c_2 a_2² + c_3 a_3² = 1/3,
           c_3 a_2 α_{32} = 1/6.
Having four equations with six unknowns, it follows that in the case m = 3 we have a two-
parameter family of Runge-Kutta methods. One can show that among the methods of this
family there does not exist a single method with order greater than three.
    y_{n+1} - y_n = (h/4)(k_1 + 3k_3),
    k_1 = f(x_n, y_n),
    k_2 = f(x_n + h/3, y_n + (h/3) k_1),
    k_3 = f(x_n + 2h/3, y_n + (2h/3) k_2),

which is known in the bibliography as Heun's method.
For a_2 = 1/2, a_3 = 1 (⇒ c_1 = c_3 = 1/6, c_2 = 2/3, α_{32} = 2) we get the method

    y_{n+1} - y_n = (h/6)(k_1 + 4k_2 + k_3),
    k_1 = f(x_n, y_n),
    k_2 = f(x_n + h/2, y_n + (h/2) k_1),
    k_3 = f(x_n + h, y_n - h k_1 + 2h k_2).

For m = 4 one obtains, in particular, the standard method of fourth order

    y_{n+1} - y_n = (h/6)(k_1 + 2k_2 + 2k_3 + k_4),

(9.7.8)    k_1 = f(x_n, y_n),
           k_2 = f(x_n + h/2, y_n + (h/2) k_1),
           k_3 = f(x_n + h/2, y_n + (h/2) k_2),
           k_4 = f(x_n + h, y_n + h k_3),
which is traditionally most used in applications.
Among the methods of fourth order, the so-called Gill's variant is often used; it can
be expressed as the following recursive procedure:
n := 0, Q_0 := 0
(*) Y_0 := y_n,
    k_1 := h f(x_n, Y_0),         Y_1 := Y_0 + (1/2)(k_1 - 2Q_0),
    Q_1 := Q_0 + 3[(1/2)(k_1 - 2Q_0)] - (1/2) k_1,
    k_2 := h f(x_n + h/2, Y_1),   Y_2 := Y_1 + (1 - √(1/2))(k_2 - Q_1),
    Q_2 := Q_1 + 3[(1 - √(1/2))(k_2 - Q_1)] - (1 - √(1/2)) k_2,
    k_3 := h f(x_n + h/2, Y_2),   Y_3 := Y_2 + (1 + √(1/2))(k_3 - Q_2),
    Q_3 := Q_2 + 3[(1 + √(1/2))(k_3 - Q_2)] - (1 + √(1/2)) k_3,
    k_4 := h f(x_n + h, Y_3),     Y_4 := Y_3 + (1/6)(k_4 - 2Q_3),
    Q_4 := Q_3 + 3[(1/6)(k_4 - 2Q_3)] - (1/2) k_4,
    y_{n+1} := Y_4,  Q_0 := Q_4,
    n := n + 1,
    go to (*).
Program 9.8.1.
The subroutine EULCAU realizes the Euler-Cauchy and the improved Euler-Cauchy
method. The parameters in the parameter list have the following meaning:
XP - start point of integration interval;
H - integration step;
N - integer, such that N + 1 is the length of the vector Y;
M - integer which defines the way the vector Y is constructed. Namely, every M-th value
of the solution obtained during the integration process is stored in turn in the vector Y;
Y - vector of solutions of length N+1, whereby Y(1) is the given initial condition
y_0, Y(2) is the value of the solution obtained by integration at the point XP + M*H, etc.;
FUN - name of the function subroutine which defines the right-hand side of the
differential equation f(x, y);
K - integer with values K=1 and K=2, which governs integration according to the
Euler-Cauchy and the improved Euler-Cauchy method, respectively.
Subroutine EULCAU is of form:
SUBROUTINE EULCAU(XP,H,N,M,Y,FUN,K)
DIMENSION Y (1)
X=XP
Y1=Y(1)
NN=N+1
DO 10 I=2,NN
DO 20 J=1,M
YO=Y1
Y1=FUN(X,YO)
GO TO (1 , 2) , K
1 Y1=YO+H*FUN(X+0.5*H,Y0+0.5*H*Y1)
GO TO 20
2 Y1=YO+H*(Y1+FUN(X+H,YO+H*Y1))/2.
20 X=X+H
10 Y(I)=Y1
RETURN
END
c
FUNCTION FUN(X,Y)
FUN=X*X+Y
RETURN
END
The main program and output listing are given in the further text. As input parameters
for the integration we have taken H=0.1, N=10, M=1, and in the second case H=0.05,
N=10, M=2. The columns Y1N and Y2N in the output listing give values of the solution
of the given Cauchy problem, according to the regular and the improved Euler-Cauchy
method, respectively. In addition to those columns, the output listing contains columns
with the corresponding errors (relative to the exact solution, expressed in %).
C===================================================
C RESAVANJE DIF. JED. EULER-CAUCHYEVIM
C I POBOLJSANIM METODOM
C===================================================
EXTERNAL FUN
DIMENSION Y(100),Z(100)
F(X)=6.*EXP(X-1.)-X*X-2.*X-2.
OPEN(5,FILE='EULCAU.OUT')
OPEN(8,FILE='EULCAU.IN')
WRITE(5,10)
10 FORMAT(10X,'RESAVANJE DIF.JED.EULER-CAUCHYEVIM'
1 ' I POBOLJSANIM METODOM')
20 READ(8,25,END=99)XP,Y(1),H,N,M
25 FORMAT(3F6.1,2I3)
CALL EULCAU(XP,H,N,M,Y,FUN,1)
Z(1)=Y(1)
CALL EULCAU(XP,H,N,M,Z,FUN,2)
WRITE(5,30)H
30 FORMAT(1H0,30X,'(H=',F6.4,')'//15X,'XN',8X,
1'Y1N',4X,'GRESKA(%)',5X,'Y2N',4X,'GRESKA(%)'/)
NN=N+1
X=XP
DO 11 I=1,NN
G1=ABS((Y(I)-F(X))/F(X))*100.
G2=ABS((Z(I)-F(X))/F(X))*100.
WRITE(5,15)X,Y(I),G1,Z(I),G2
15 FORMAT(15X,F3.1,3X,F9.6,2X,F7.5,3X,F9.6,2X,
1 F7.5)
11 X=X+H*M
GO TO 20
99 CLOSE(5)
CLOSE(8)
STOP
END
(H= .0500)
XN Y1N GRESKA(%) Y2N GRESKA(%)
1.0 1.000000 .00000 1.000000 .00000
1.1 1.220824 .01655 1.220888 .01130
1.2 1.487963 .03046 1.488098 .02140
1.3 1.808391 .04213 1.808604 .03034
1.4 2.189811 .05192 2.190111 .03824
1.5 2.640738 .06019 2.641133 .04523
1.6 3.170581 .06721 3.171082 .05143
1.7 3.789740 .07324 3.790357 .05696
1.8 4.509705 .07848 4.510451 .06195
1.9 5.343177 .08309 5.344066 .06647
2.0 6.304192 .08719 6.305238 .07061
Program 9.8.2.
According to formulas (9.7.8) for the standard Runge-Kutta method of fourth order,
the following subroutine RK4 is written:
SUBROUTINE RK4(XO,YO,H,M,N,YVEK,F)
C=============================================
C METOD RUNGE-KUTTA CETVRTOG REDA
C=============================================
DIMENSION YVEK(1)
T=H/2.
X=XO
Y=YO
DO 20 I=1,N
DO 10 J=1,M
A=F(X,Y)
B=F(X+T,Y+T*A)
C=F(X+T,Y+T*B)
D=F(X+H,Y+H*C)
X=X+H
10 Y=Y+H/6.*(A+2.*B+2.*C+D)
20 YVEK (I) =Y
RETURN
END
C==========================================
C RESAVANJE DIF.JED. METODOM RUNGE-KUTTA
C==========================================
EXTERNAL FUN
DIMENSION Y (100)
F(X)=6.*EXP(X-1.)-X*X-2.*X-2.
OPEN(5,FILE='RK4.OUT')
OPEN(8,FILE='RK4.IN')
WRITE(5,10)
10 FORMAT (14X, 'RESAVANJE DIF.JED. METODOM',
1 ' RUNGE-KUTTA')
20 READ (8,5,END=99)XO,YO,H,N,M
5 FORMAT (3F6.1,2I3)
CALL RK4(XO,YO,H,M,N,Y,FUN)
G=0.
WRITE (5,25) H,XO,YO,G
25 FORMAT( 28X,'(H=' ,F6.4,') '//15X, 'XN',13X, 'YN',
110X,'GRESKA(%) '//15X,F3.1,8X,F9.6,7X,F7.5)
X=XO
DO 11 I=1,N
X=X+H*M
G=ABS((Y(I)-F(X))/F(X))*100.
11 WRITE (5,15)X,Y(I),G
15 FORMAT (15X,F3.1,8X,F9.6,7X,F7.5)
GO TO 20
99 CLOSE(5)
CLOSE(8)
STOP
END
c
FUNCTION FUN(X,Y)
FUN=X*X+Y
RETURN
END
Taking H=0.1, N=10, M=1 the following results are obtained:
C==================================================
C RESAVANJE DIF.JED. METODOM RUNGE-KUTTA
C (GILLOVA VARIJANTA)
C==================================================
EXTERNAL FUN
REAL*8 Y(100),F,FUN,XO,X,H,G
F(X)=6.*DEXP(X-1.)-X*X-2.*X-2.
OPEN(8,FILE='GILL.IN')
OPEN(5,FILE='GILL.OUT')
WRITE(5,10)
10 FORMAT(8X,'RESAVANJE DIF.JED.METODOM',
1' RUNGE-KUTTA (GILLOVA VARIJANTA)'//)
20 READ(8,25,END=99)X,Y(1),H,N,M
25 FORMAT(3F6.1,2I3)
XO=X
CALL GILL(XO,H,N,M,Y,FUN)
WRITE(5,30)H
30 FORMAT(/28X,'(H=',F6.4,')'//15X,'XN',13X,'YN',
1 10X,'GRESKA(%)'/)
NN=N+1
DO 11 I=1,NN
G=DABS((Y(I)-F(X))/F(X))*100.
WRITE(5,15)X,Y(I),G
15 FORMAT(15X,F3.1,8X,F9.6,6X,D10.3)
11 X=X+H*M
GO TO 20
99 CLOSE(5)
CLOSE(8)
STOP
END
c
C
SUBROUTINE GILL(XO,H,N,M,Y,FUN)
REAL*8 Y(1),H,FUN,XO,YO,Q,K,A,B
B=DSQRT(0.5DO)
Q=0.D0
YO=Y(1)
NN=N+1
DO 10 I=2,NN
DO 20 J=1,M
K=H*FUN(XO,YO)
A=0.5*(K-2.*Q)
YO=YO+A
Q=Q+3.*A-0.5*K
K=H*FUN(XO+H/2. ,YO)
A=(1.-B)*(K-Q)
YO=YO+A
Q=Q+3.*A-(1.-B)*K
K=H*FUN(XO+H/2,YO)
A=(1.+B)*(K-Q)
YO=YO+A
Q=Q+3.*A-(1.+B)*K
K=H*FUN(XO+H,YO)
A=(K-2.*Q)/6.
YO=YO+A
Q=Q+3.*A-K/2.
20 XO=XO+H
10 Y(I)=YO
RETURN
END
c
FUNCTION FUN(X,Y)
REAL*8 FUN,X,Y
FUN=X*X+Y
RETURN
END
Consider the system of differential equations

(9.9.1)    y_i' = f_i(x; y_1, ..., y_p),    y_i(x_0) = y_{i0}    (i = 1, ..., p),

which can be represented in the vector form

(9.9.2)    y' = f(x; y),    y(x_0) = y_0,

where

    y = [y_1  y_2  ...  y_p]^T,    y_0 = [y_{10}  y_{20}  ...  y_{p0}]^T.
We are also interested in the solution of the Cauchy problem for differential equations of
higher order. Note, nevertheless, that this problem can be reduced to the previous one.
Namely, let the differential equation of order p,

    y^{(p)} = f(x; y, y', ..., y^{(p-1)}),

be given. By the substitutions z_1 = y, z_2 = y', ..., z_p = y^{(p-1)} we get the system

    z_1' = z_2,  ...,  z'_{p-1} = z_p,
    z_p' = f(x; z_1, z_2, ..., z_p),

which is of the form (9.9.1).
The linear multi-step methods can formally be generalized to the vector form

    Σ_{i=0}^{k} α_i y_{n+i} = h Σ_{i=0}^{k} β_i f_{n+i},

where f_{n+i} = f(x_{n+i}, y_{n+i}), and as such can then be applied to the solution of the
Cauchy problem (9.9.2).
· ~ " Also, the Runge-Kutta methods for solution of Cauchy problem (9.9.2) are of form
where rn
k1 = .((:r, m,
k.;. = .((:r: + o,)l., if+ b.;.h)
1:-1 ·i-1
a.i = L (Y.i,j, b.;.= L (Y.;,jkj (1: = 2, ... , m).
j=l .i=1
All analysis given in previous sections can formally be translated to noted vector
methods.
As an example, we realize the standard Runge-Kutta method of fourth order (9.7.8) for
solving a system of two differential equations:
SUBROUTINE RKS(XP,XKRAJ,YP,ZP,H,N,YY,ZZ)
REAL KY1,KY2,KY3,KY4,KZ1,KZ2,KZ3,KZ4
DIMENSION YY(1),ZZ(1)
K=(XKRAJ-XP)/(H*FLOAT(N))
N1=N+1
X=XP
Y=YP
Z=ZP
T=H/2.
YY(1)=Y
ZZ (1) =Z
DO 6 I=2,N1
DO 7 J=1,K
KY1=FUN(1,X,Y,Z)
KZ1=FUN(2,X,Y,Z)
KY2=FUN(1,X+T,Y+T*KY1,Z+T*KZ1)
KZ2=FUN(2,X+T,Y+T*KY1,Z+T*KZ1)
KY3=FUN(1,X+T,Y+T*KY2,Z+T*KZ2)
KZ3=FUN(2,X+T,Y+T*KY2,Z+T*KZ2)
KY4=FUN(1,X+H,Y+H*KY3,Z+H*KZ3)
KZ4=FUN(2,X+H,Y+H*KY3,Z+H*KZ3)
Y=Y+H*(KY1+2.*(KY2+KY3)+KY4)/6.
Z=Z+H*(KZ1+2.*(KZ2+KZ3)+KZ4)/6.
7 X=X+H
YY(I) =Y
6 ZZ(I)=Z
RETURN
END
Using this subroutine we solved the system of equations

    y' = xyz,    z' = xy/z

under the conditions y(1) = 1/3 and z(1) = 1 on the segment [1, 2.5], taking the integration
step h = 0.01 and printing on output x with step 0.1 and the corresponding values of
y, y_T, z, z_T, where y_T and z_T are the exact solutions of this system, given by

    y_T = 72/(7 - x²)³    and    z_T = 6/(7 - x²).
C====================================================
C RESAVANJE SISTEMA DIF. JED. METODOM RUNGE-KUTTA
C====================================================
DIMENSION YT(16),ZT(16),YY(16),ZZ(16),X(16)
YEG(P)=72./(7.-P*P)**3
ZEG(P)=6./(7.-P*P)
OPEN(8,FILE='RKS.IN')
OPEN(5,FILE='RKS.OUT')
READ(8,15)N,XP,YP,ZP,XKRAJ
15 FORMAT(I2,4F3.1)
YP=YP/3.
H=0.1
N1=N+1
DO 5 I=1, N1
X(I)=XP+H*FLOAT(I-1)
YT(I)=YEG(X(I))
5 ZT(I)=ZEG(X(I))
WRITE(5,22)
H=0.01
158 Numerical Methods in Computational Engineering
CALL RKS(XP,XKRAJ,YP,ZP,H,N,YY,ZZ)
WRITE(5,18)H,(X(I),YY(I),YT(I),ZZ(I),ZT(I),
1 I=1,N1)
18 FORMAT(//7X,'KORAK INTEGRACIJE H=',F6.3//7X,
1'X',11X,'Y',10X,'TACNO',11X,'Z',10X,'ZTACNO'//
2(F10.2,4F14.7))
22 FORMAT(1H1,9X, 'RESAVANJE SISTEMA SIMULTANIH',
1' DIFERENCIJALNIH JEDNACINA'//33X,'Y''=XYZ'//
1 33X,'Z''=XY/Z')
CLOSE(5)
CLOSE(8)
STOP
END
c
FUNCTION FUN(J,X,Y,Z)
GO TO (50,60),J
50 FUN=X*Y*Z
RETURN
60 FUN=X*Y/Z
RETURN
END
In this section we will point out the difference method for the solution of the boundary
problem

    y'' + p(x) y' + q(x) y = f(x),    y(a) = A,    y(b) = B.

By substituting the derivatives with the standard differences at the inner grid points
x_n = a + nh (h = (b - a)/(N + 1)), we obtain

(9.10.2)    (y_{n+1} - 2y_n + y_{n-1})/h² + p_n (y_{n+1} - y_{n-1})/(2h) + q_n y_n = f_n    (n = 1, ..., N),

i.e.

(9.10.3)    a_n y_{n-1} + b_n y_n + c_n y_{n+1} = h² f_n    (n = 1, ..., N),

where

    a_n = 1 - (h/2) p_n,    b_n = h² q_n - 2,    c_n = 1 + (h/2) p_n.

Together with the boundary conditions y_0 = A, y_{N+1} = B, the equations (9.10.3) form a
tridiagonal system of linear equations, which is solved by matrix factorization.
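The factorization used here is often called the Thomas algorithm; a compact Python sketch (an illustration only, with a made-up right-hand side, not part of the Fortran program that follows) of the same forward elimination and back substitution reads:

```python
# Thomas algorithm for a_n y_{n-1} + b_n y_n + c_n y_{n+1} = d_n:
# forward elimination of the subdiagonal, then back substitution.

def thomas(a, b, c, d):
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    y = dp[:]
    for i in range(n - 2, -1, -1):   # back substitution
        y[i] -= cp[i] * y[i + 1]
    return y

# small check: system [[2,1,0],[1,2,1],[0,1,2]] with solution [1, 2, 3]
y = thomas(a=[0.0, 1.0, 1.0], b=[2.0, 2.0, 2.0], c=[1.0, 1.0, 0.0],
           d=[4.0, 8.0, 8.0])
print([round(v, 10) for v in y])  # -> [1.0, 2.0, 3.0]
```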
DIMENSION A(100),B(100),C(100),D(100)
C===================================================
C MATRICNA FAKTORIZACIJA ZA RESAVANJE
C KONTURNIH PROBLEMA KOD LINEARNIH
C DIFERENCIJALNIH JEDNACINA II REDA
C Y''+ P(X)Y'+ Q(X)Y = F(X)
C Y(DG) = YA, Y(GG) = YB
C==================================================
OPEN(8,FILE='KONTUR.IN')
OPEN(7,FILE='KONTUR.OUT')
READ(8,5) DG,YA,GG,YB
5 FORMAT(4F10.5)
C UCITAVANJE BROJA MEDJUTACAKA
10 WRITE(*,14)
14 FORMAT(1X,'UNETI BROJ MEDJUTACAKA'
1' U FORMATU I2'/5X,'(ZA N=0 => KRAJ)')
READ(5,15) N
15 FORMAT (I2)
N1=N+1
IF(N.EQ.0) GO TO 60
H=(GG-DG)/FLOAT(N1)
HH=H*H
X=DG
DO 20 I=1,N
X=X+H
Y=H/2.*PQF(X,1)
A(I)=1. -Y
C(I)=1.+Y
B(I)=HH*PQF(X,2)-2.
20 D(I)=HH*PQF(X,3)
D(1)=D(1)-YA*A(1)
D(N)=D(N)-YB*C(N)
160 Numerical Methods in Computational Engineering
C(1)=C(1)/B(1)
DO 25 I=2,N
B(I)=B(I)-A(I)*C(I-1)
25 C(I)=C(I)/B(I)
D(1)=D(1)/B(1)
DO 30 I=2,N
30 D(I)=(D(I)-A(I)*D(I-1))/B(I)
NM=N-1
DO 35 I=1,NM
J=NM-I+1
35 D(J)=D(J)-C(J)*D(J+1)
WRITE(7,40)N,(I,I=1,N1)
40 FORMAT(///5X,'BROJ MEDJUTACAKA N='
1 ,I3///5X, 'I' ,6X, '0' ,9I10)
DO 45 I=1,N
C(I)=DG+H*FLOAT(I)
45 B(I)=PQF(C(I),4)
WRITE(7,50)DG,(C(I),I=1,N),GG
WRITE(7,55)YA,(D(I),I=1,N),YB
WRITE(7,65)YA,(B(I),I=1,N),YB
50 FORMAT(/5X,'X(I)' ,10(F6.2,4X))
55 FORMAT(/5X,'Y(I)',10F10.6)
65 FORMAT(/5X, 'YEGZ',10F10.6)
GO TO 10
60 CLOSE(7)
CLOSE(8)
STOP
END
Note that this program is realized so that the number of inner points N is read from the
input; in the case N = 0 the program ends. Also, the program provides tabulating of the
exact solution at the observed points, as a control. Clearly, the latter makes sense only
for academic examples where the solution is known. So, for example, for the boundary
problem

    y'' - 2x y' - 2y = -4x,    y(0) = 1,    y(1) = 1 + e,

with the exact solution y(x) = x + e^{x²}, the subroutine PQF is of the form:
FUNCTION PQF(X,M)
GO TO (10,20,30,40),M
10 PQF=-2.*X
RETURN
20 PQF=-2.
RETURN
30 PQF=-4.*X
RETURN
40 PQF=X+EXP(X*X)
RETURN
END
BROJ MEDJUTACAKA N= 4
I 0 1 2 3 4 5
X(I)     .00      .20      .40      .60      .80     1.00
Y(I)  1.000000 1.243014 1.576530 2.035572 2.695769 3.718282
YEGZ  1.000000 1.240811 1.573511 2.033329 2.696481 3.718282
LECTURES
LESSON X
10.1. Introduction
Partial differential equations (PDEs) arise in all fields of engineering and science.
Most real physical processes are governed by partial differential equations. In many
cases, simplifying approximations are made to reduce the governing PDEs to ordinary
differential equations (ODEs) or even to algebraic equations. However, because of the
ever increasing requirements for more accurate modelling of physical processes, engineers
and scientists are more and more required to solve the actual PDEs that govern the
physical problem being investigated. Physical problems are governed by many different
PDEs. A few problems are governed by a single first-order PDE. Numerous problems
are governed by a system of first-order PDEs. Some problems are governed by a single
second-order PDE, and numerous problems are governed by a system of second-order
PDEs. A few problems are governed by fourth-order PDEs. The two most frequent
types of physical problems described by PDEs are equilibrium and propagation problems.
The classification of PDEs is most easily explained for a single second-order linear
PDE of the form

(10.1.1)    A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D ∂u/∂x + E ∂u/∂y + F u = G,

where A, B, C, D, E, F, G are given functions, continuous in an area S of the plane
xOy. The area S is usually defined as the interior of some curve Γ. Of course, the area
S can be finite as well as infinite. A typical problem consists of finding a twice
continuously differentiable solution (x, y) → u(x, y) which satisfies equation (10.1.1)
and some conditions on the curve (contour) Γ.
Linear PDEs of second order are classified as elliptic, parabolic, or hyperbolic,
depending on the sign of the discriminant B² - 4AC in the given area S, as follows:
1° B² - 4AC < 0 Elliptic
2° B² - 4AC = 0 Parabolic
3° B² - 4AC > 0 Hyperbolic
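The sign test above is trivial to automate; the following Python sketch (not from the text; the coefficient identifications for the heat and wave equations in (x, t) variables are our reading of the model equations discussed in this section) checks the three model equations:

```python
# Classify a second-order linear PDE by the sign of B^2 - 4AC.

def pde_type(A, B, C):
    d = B * B - 4.0 * A * C
    return "elliptic" if d < 0 else ("parabolic" if d == 0 else "hyperbolic")

a = 2.0
print(pde_type(1.0, 0.0, 1.0))     # Laplace u_xx + u_yy = 0 -> elliptic
print(pde_type(-a * a, 0.0, 0.0))  # heat u_t - a^2 u_xx = 0 -> parabolic
print(pde_type(-a * a, 0.0, 1.0))  # wave u_tt - a^2 u_xx = 0 -> hyperbolic
```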
The terminology elliptic, parabolic, and hyperbolic chosen to classify PDEs reflects
the analogy between the form of the discriminant B² - 4AC for PDEs and the form
of the discriminant B² - 4AC which classifies conic sections, described by the general
second-order algebraic equation

    A x² + B xy + C y² + D x + E y + F = 0,

where for a negative, zero, or positive value of the discriminant we have an ellipse, a
parabola, or a hyperbola, respectively. It is easy to check that the Laplace equation
(10.1.2)    ∂²u/∂x² + ∂²u/∂y² = 0

is of elliptic type, the heat conduction equation

(10.1.3)    ∂u/∂t - a² ∂²u/∂x² = 0

of parabolic type, and the wave equation

(10.1.4)    ∂²u/∂t² - a² ∂²u/∂x² = 0

of hyperbolic type. In this chapter we will show one way of numerically solving PDEs,
namely the grid method applied to the Laplace and wave equations. The heat conduction
equation can be solved in a similar way, which we leave to the reader.
Consider the equation

(10.2.1)    Lu = f,

and let us, in the area D bounded by the curve Γ (D = int Γ), look for its solution
which on the curve Γ satisfies a given boundary condition

(10.2.2)    u|_Γ = Φ.
In the application of the grid method, at first one should choose a discrete set of points
D_h belonging to the area D̄ (= D ∪ Γ), called a grid. Most frequently, a family of
parallel straight lines x_i = x_0 + ih, y_j = y_0 + jl (i, j = 0, ±1, ±2, ...) is taken as the
grid in applications. The intersection points of these families are called nodes of the grid,
and h and l are the steps of the grid. Two nodes of the grid are called neighboring if the
distance between them along the x or y axis is one step only. If all four neighboring
nodes of some node belong to the area D, then this node is called interior or inner;
otherwise, a node of the grid D_h is called a boundary node. In addition to rectangular
grids, other grid shapes are also used in practice.
The grid method consists of approximating equations (10.2.1) and (10.2.2) by cor-
responding difference equations. Namely, we can approximate the operator L by a
difference operator very simply, by substituting the derivatives with the corresponding
differences at the inner nodes of the grid. Thereby the standard difference formulas for
the derivatives are used.
Let the closest point on the contour Γ to the boundary node A be the point B, and let
their distance be δ (see Fig. 10.2.1).
Figure 10.2.1
Based on the function values at the points B and C, we get the value at A by linear
interpolation.
As an example, consider the Laplace equation (10.1.2), whose solution on the contour of
the square D = {(x, y) | 0 < x < 1, 0 < y < 1} fulfills the given condition
u(x, y) = Ψ(x, y) ((x, y) ∈ Γ). Let us choose the grid in D_h with l = h = 1/(N - 1), so
that the grid nodes are the points (x_i, y_j) = ((i - 1)h, (j - 1)l) (i, j = 1, ..., N). The
standard difference approximation scheme for solving the Laplace equation is of the form
    (1/h²)(u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{i,j}) = 0,

or

    u_{i,j} = (1/4)(u_{i,j+1} + u_{i,j-1} + u_{i-1,j} + u_{i+1,j}).
Taking i, j = 2, ..., N - 1 in the last equality, we get a system of (N - 2)² linear
equations. For solving this system the method of simple iterations is usually used, or,
even more simply, the Gauss-Seidel method.
The corresponding program for solving the problem in consideration is of the form:
C=================================================
C RESAVANJE LAPLACE-OVE JEDNACINE
C=================================================
DIMENSION U(25,25)
OPEN(8,FILE='LAPLACE.IN')
OPEN(5,FILE='LAPLACE.OUT')
READ(8,4)N
4 FORMAT (I2)
M=N-1
READ(8,1)(U(1,J),J=1,N),(U(N,J),J=1,N),
1(U(I,1),I=2,M),(U(I,N),I=2,M)
1 FORMAT(8F10.0)
DO 10 I=2,M
DO 10 J=2,M
10 U(I,J)=0.
IMAX=0
20 WRITE(*,5)
5 FORMAT(5X,'UNETI MAKSIMALNI BROJ ITERACIJA'/
110X,'(ZA MAX=0 => KRAJ)')
READ(*,4)MAX
IF(MAX.EQ.0) GOTO 100
DO 30 ITER=1,MAX
DO 30 I=2,M
DO 30 J=2,M
30 U(I,J)=(U(I,J+1)+U(I,J-1)+U(I-1,J)+U(I+1,J))/4.
IMAX=IMAX+MAX
WRITE(5,65) IMAX,(J,J=1,N)
65 FORMAT(//26X,'BROJ ITERACIJA JE',I3//17X,
14(5X,'J=',I2))
DO 60 I=1,N
60 WRITE(5,66) I,(U(I,J),J=1,N)
66 FORMAT(13X, 'I =',I2,6F10.4)
GO TO 20
100 CLOSE(8)
CLOSE(5)
STOP
END
For solving the system of linear equations we used the Gauss-Seidel method with the
initial values u_{i,j} = 0 (i, j = 2, ..., N - 1), whereby one can control the number of
iterations on input. For N=4 and the given boundary conditions the output is:
BROJ ITERACIJA JE 2
J= 1 J= 2 J= 3 J= 4
I = 1 .0000 30.0000 60.0000 90.0000
I = 2 60.0000 47.8125 53.9063 60.0000
I = 3 120.0000 83.9063 56.9531 30.0000
I = 4 180.0000 120.0000 60.0000 .0000
BROJ ITERACIJA JE 7
J= 1 J= 2 J= 3 J= 4
I = 1 .0000 30.0000 60.0000 90.0000
Consider the wave equation

(10.4.1)    ∂²u/∂t² = a² ∂²u/∂x²    (0 < x < b, t > 0),

with the initial conditions

(10.4.2)    u(x, 0) = f(x),    u_t(x, 0) = g(x)    (0 ≤ x ≤ b),

and the boundary conditions

(10.4.3)    u(0, t) = Φ(t),    u(b, t) = Ψ(t)    (t ≥ 0).

The standard difference approximation of equation (10.4.1) is

(10.4.4)    u_{i+1,j} - 2u_{i,j} + u_{i-1,j} = (1/r²)(u_{i,j+1} - 2u_{i,j} + u_{i,j-1}),

where r = al/h (h and l are the steps along the x and t axes, respectively) and
u_{i,j} ≈ u(x_i, t_j). Based on the first equality in (10.4.2) we have

(10.4.5)    u_{i,0} = f(x_i) = f_i.

By introducing the fictive layer j = -1, the second initial condition in (10.4.2) can simply
be approximated using

(10.4.6)    u_t(x_i, 0) = g(x_i) = g_i ≈ (u_{i,1} - u_{i,-1})/(2l).

Combining this with (10.4.4) for j = 0, we obtain

    u_{i,1} = l g_i + f_i + (r²/2)(f_{i+1} - 2f_i + f_{i-1}),

i.e.

(10.4.7)    u_{i,1} = l g_i + (1 - r²) f_i + (r²/2)(f_{i+1} + f_{i-1}).

For the further layers, solving (10.4.4) for u_{i,j+1} gives

(10.4.8)    u_{i,j+1} = r²(u_{i+1,j} + u_{i-1,j}) - u_{i,j-1} + 2(1 - r²) u_{i,j}.
If we put h = b/N and x_i = (i - 1)h (i = 1, 2, ..., N + 1), due to the boundary
conditions (10.4.3) we have

(10.4.9)    u_{1,j} = Φ(t_j),    u_{N+1,j} = Ψ(t_j),

where j = 0, 1, ... For determining the solution inside the rectangle P = {(x, t) | 0 < x <
b, 0 < t < T_max}, the maximal value of the index j is the integer part of T_max/l,
i.e., j_max = M = [T_max/l].
Based on the equalities (10.4.5), (10.4.6), (10.4.7), (10.4.8), (10.4.9), the approximate
solutions of the given problem at the grid nodes of the rectangle P are simple to obtain.
This algorithm is coded in the following program.
C==================================================
C RESAVANJE PARCIJALNE DIF. JED. HIPERBOLICNOG TIPA
C==================================================
DIMENSION U(3,9)
OPEN(8,FILE='TALAS.IN')
OPEN(5,FILE='TALAS.OUT')
READ (8,5)N,A,B,R,TMAX
5 FORMAT(I2,4F5.2)
N1=N+1
WRITE (5,10) (I,I=1,N1)
10 FORMAT(10X,1HJ,<N+1>(4X,'U(' ,I1,' ,J) ')/)
H=B/FLOAT(N)
EL=R*H/A
M=TMAX/EL
T=0.
DO 15 K=1,2
U(K,1)=FF(T,B,3)
U(K,N1)=FF(T,B,4)
15 T=T+EL
X=0.
R2=R*R
DO 20 I=2,N
X=X+H
U(1,I)=FF(X,B,1)
20 U(2,I)=EL*FF(X,B,2)+(1.-R2)*U(1,I)
DO 25 I=2,N
25 U(2,I)=U(2,I)+R2/2.*(U(1,I+1)+U(1,I-1))
J=0
30 WRITE(5,35)J,(U(1,I),I=1,N1)
35 FORMAT(7X,I5,<N1>F10.4)
IF(J.EQ.M)GO TO 50
J=J+1
U(3,1)=FF(T,B,3)
U(3,N1)=FF(T,B,4)
DO 40 I=2,N
40 U(3,I)=R2*(U(2,I+1)+U(2,I-1))-U(1,I)+2.
1*(1.-R2)*U(2,I)
T=T+EL
DO 45 I=1,N1
U(1,I)=U(2,I)
45 U(2,I)=U(3,I)
GO TO 30
50 CLOSE(5)
CLOSE(8)
STOP
END
Note that the values of the solution in the three successive layers j - 1, j, j + 1 are stored
in the first, second, and third row of the matrix U, respectively.
The functions f, g, Φ, Ψ are defined by the function subroutine FF for I=1, 2, 3, 4,
respectively.
In the considered case, for a = 2, b = 4, T_max = 6, f(x) = x(4 - x), g(x) = 0, Φ(t) = 0,
Ψ(t) = 0, N = 4, and r = 1, the subroutine FF and the corresponding output listing with
results have the following form:
FUNCTION FF(X,B,I)
GO T0(10,20,30,40),I
10 FF=X*(B-X)
RETURN
20 FF=0.
RETURN
30 FF=0.
RETURN
40 FF=0.
RETURN
END
[4] Milovanović, G.V. and Đorđević, Dj.R., Programiranje numeričkih metoda na FORTRAN jeziku. Institut za dokumentaciju zaštite na radu "Edvard Kardelj", Niš, 1981 (Serbian).
[5] Milovanović, G.V., Numerical Analysis II. Naučna knjiga, Beograd, 1988 (Serbian).
[6] Stoer, J. and Bulirsch, R., Introduction to Numerical Analysis. Springer, New York, 1980.
[7] Press, W.H., Flannery, B.P., Teukolsky, S.A., and Vetterling, W.T., Numerical Recipes - The Art of Scientific Computing. Cambridge University Press, 1989.
[8] Milovanović, G.V. and Kovačević, M.A., Zbirka rešenih zadataka iz numeričke analize. Naučna knjiga, Beograd, 1985 (Serbian).
[9] Ralston, A., A First Course in Numerical Analysis. McGraw-Hill, New York, 1965.
[10] Hildebrand, F.B., Introduction to Numerical Analysis. McGraw-Hill, New York, 1974.
[11] Acton, F.S., Numerical Methods That Work (corrected edition). Mathematical Association of America, Washington, D.C., 1990.
[12] Abramowitz, M. and Stegun, I.A., Handbook of Mathematical Functions. National Bureau of Standards, Applied Mathematics Series, Washington, 1964 (reprinted 1968 by Dover Publications, New York).
[13] Rice, J.R., Numerical Methods, Software, and Analysis. McGraw-Hill, New York, 1983.
[14] Forsythe, G.E., Malcolm, M.A., and Moler, C.B., Computer Methods for Mathematical Computations. Prentice-Hall, Englewood Cliffs, NJ, 1977.
[15] Kahaner, D., Moler, C., and Nash, S., Numerical Methods and Software. Prentice-Hall, Englewood Cliffs, NJ, 1989.
[16] Hamming, R.W., Numerical Methods for Engineers and Scientists. Dover, New York, 1962 (reprinted 1986).
[17] Ferziger, J.H., Numerical Methods for Engineering Applications. Stanford University, John Wiley & Sons, Inc., New York, 1998.
[18] Pearson, C.E., Numerical Methods in Engineering and Science. University of Washington, Van Nostrand Reinhold Company, New York, 1986.
[19] Stephenson, G. and Radmore, P.M., Advanced Mathematical Methods for Engineering and Science Students. Imperial College London, University College London, Cambridge Univ. Press, 1999.
[20] Ames, W.F., Numerical Methods for Partial Differential Equations. 2nd ed., Academic Press, New York, 1977.
[21] Richtmyer, R.D. and Morton, K.W., Difference Methods for Initial Value Problems. 2nd ed., Wiley-Interscience, New York, 1967.
[22] Mitchell, A.R. and Griffiths, D.F., The Finite Difference Method in Partial Differential Equations. Wiley, New York, 1980.
[23] IMSL Math/Library Users Manual, IMSL Inc., 2500 City West Boulevard, Houston TX 77042.
[24] NAG Fortran Library. Numerical Algorithms Group, 256 Banbury Road, Oxford OX2 7DE, U.K., Chapter F02.
Faculty of Civil Engineering, Belgrade - Master Study
Faculty of Civil Engineering and Architecture, Nis - Doctoral Study
COMPUTATIONAL ENGINEERING
LECTURES

LESSON XI
Integral Equations
11.1. Introduction
In spite of the fact that integral equations are almost never treated in numerical analysis textbooks, there is a large and growing literature on their numerical solution. One reason for the sheer volume of this activity is that there are many different kinds of equations, each with many different possible pitfalls. Often many different algorithms have been proposed to deal with a single case. There is a close correspondence between linear integral equations, which specify linear integral relations among functions in an infinite-dimensional function space, and plain old linear equations, which specify analogous relations among vectors in a finite-dimensional vector space. This correspondence lies at the heart of most computational algorithms, as we will see in the program realization of their numerical solution.
The equation

$$\int_a^b K(x, t)\, y(t)\, dt = f(x)$$

is called the Fredholm integral equation of the first kind. This equation can be written in analogous form as the matrix equation

$$K \cdot y = f,$$

with formal solution $y = K^{-1} \cdot f$, where $K^{-1}$ is the matrix inverse. Both equations are solvable when the function f, respectively the vector f, is nonzero (the homogeneous case with f = 0 is almost never useful), and when K(x, t), respectively K, is invertible.
The analogous matrix form of the Fredholm equation of the second kind (11.1.1) is

$$\left(K - \frac{1}{\lambda} I\right) \cdot y = -\frac{f}{\lambda}.$$
Again, if f or f is zero, then the equation is said to be homogeneous. If the kernel K(x, t) is bounded, then, as in the matrix form, equation (11.1.1) has the property that its homogeneous form has solutions for at most a denumerably infinite set λ = λ_n, n = 1, 2, ..., the eigenvalues. The corresponding solutions y_n(x) are the eigenfunctions. The eigenvalues are real if the kernel is symmetric. In the inhomogeneous case of nonzero f or f, both equations are solvable except when λ or 1/λ is an eigenvalue, because the integral operator (or matrix) is then singular. In integral equations this dichotomy is called the Fredholm alternative.
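The matrix analogy suggests a direct way to approximate these eigenvalues numerically. The following Python sketch (an illustration, not part of this book's Fortran material; the trapezoidal discretization and node count are assumed choices) finds the values μ = 1/λ as ordinary eigenvalues of the matrix K·diag(A):

```python
import numpy as np

def fredholm_eigen(kernel, a, b, n):
    """Approximate the eigenvalues of the homogeneous equation
    y(x) = lam * int_a^b K(x,t) y(t) dt via its matrix analog:
    (K diag(A)) v = mu v with mu = 1/lam, on a trapezoidal grid."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))      # trapezoidal weights A_j
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(x[:, None], x[None, :])     # kernel matrix K(x_i, t_j)
    mu = np.linalg.eigvals(K * w[None, :])
    return mu
```

For the degenerate kernel K(x, t) = xt on [0, 1], the only nonzero eigenvalue of the operator is λ = 3 (μ = 1/3), which the sketch reproduces up to quadrature error.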
Fredholm equations of the first kind are often extremely ill-conditioned. Applying the kernel to a function is generally a smoothing operation, so the solution, which requires inverting the operator, will be extremely sensitive to small changes or errors in the input. Smoothing often actually loses information, and there is no way to get it back in an inverse operation. Specialized methods have been developed for such equations, which are often called inverse problems. The idea is that the method must augment the information given with some prior knowledge of the nature of the solution. This prior knowledge is then used, in some way, to restore lost information.
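To make the "prior knowledge" idea concrete, here is a minimal Python sketch (an illustration, not from this book) of Tikhonov regularization for the discretized first-kind equation: instead of inverting the smoothing operator directly, one minimizes the residual plus a penalty α‖y‖², which suppresses the wildly oscillating components that the data cannot determine:

```python
import numpy as np

def tikhonov_first_kind(K, A, f, alpha):
    """Regularized solution of the discretized first-kind equation
    sum_j K(x_i, t_j) A_j y_j ~ f_i: minimize ||M y - f||^2 + alpha*||y||^2,
    where M = K * diag(A) and alpha > 0 encodes the prior that y is not huge."""
    M = K * A[None, :]                     # discretized integral operator
    n = M.shape[1]
    return np.linalg.solve(M.T @ M + alpha * np.eye(n), M.T @ f)
```

For small α the residual becomes negligible whenever f lies in the range of the discretized operator; increasing α trades residual for stability against noise in f.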
Volterra integral equations of the first and second kind are of the forms

$$\int_a^x K(x, t)\, y(t)\, dt = f(x)$$

and

$$y(x) = \int_a^x K(x, t)\, y(t)\, dt + f(x),$$

respectively. Volterra equations are a special case of Fredholm equations with K(x, t) = 0 for t > x. Chopping off the unnecessary part of the integration, Volterra equations are written in a form where the upper limit of integration is the independent variable x. The analogous matrix form of the Volterra equation of the first kind (written out in components) is

$$\sum_{j=1}^{i} K_{ij}\, y_j = f_i \quad (i = 1, \ldots, n),$$

wherefrom we see that the Volterra equation corresponds to a matrix K that is lower (left) triangular. As we already know, such matrix equations are trivially soluble by forward substitution. Techniques for solving Volterra equations are similarly straightforward. When experimental measurement noise does not dominate, Volterra equations of the first kind tend not to be ill-conditioned; the upper limit of the integral introduces a sharp step that conveniently spoils any smoothing properties of the kernel. The matrix analog of the Volterra equation of the second kind is

$$(K - I) \cdot y = -f,$$

with K a lower triangular matrix. The reason there is no λ in these equations is that in the inhomogeneous case (nonzero f) it can be absorbed into K or K, while in the homogeneous case (f = 0) there is a theorem that Volterra equations of the second kind with bounded kernels have no eigenvalues with square-integrable eigenfunctions.
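The forward-substitution character of Volterra equations can be seen in a short Python sketch (illustrative only; the programs in this book are in Fortran, and the trapezoidal discretization here is an assumed choice). Each y_i is obtained directly from the already computed y_0, ..., y_{i-1}:

```python
import numpy as np

def volterra2_trap(kernel, f, a, b, n):
    """Solve y(x) = f(x) + int_a^x K(x,t) y(t) dt by forward substitution
    on a trapezoidal discretization; the lower triangular structure means
    each y_i follows directly from the previous values."""
    h = (b - a) / (n - 1)
    x = np.linspace(a, b, n)
    y = np.empty(n)
    y[0] = f(x[0])
    for i in range(1, n):
        Krow = kernel(x[i], x[:i + 1])                     # K(x_i, t_j), j <= i
        s = h * (0.5 * Krow[0] * y[0] + Krow[1:i] @ y[1:i])
        y[i] = (f(x[i]) + s) / (1.0 - 0.5 * h * Krow[i])   # solve for y_i
    return x, y
```

As a check, K(x, t) = 1 with f(x) = 1 gives the equation y(x) = 1 + ∫_0^x y(t) dt, whose exact solution is y(x) = e^x.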
Lesson XI - Integral Equations
We have considered only the case of linear integral equations. The integrand in a nonlinear version of the given equations of the first kind (Fredholm and Volterra) would be K(x, t, y(t)) instead of K(x, t)y(t), and a nonlinear version of the equations of the second kind would have an integrand K(x, t, y(x), y(t)). Nonlinear Fredholm equations are considerably more complicated than their linear counterparts. Fortunately, they do not occur as frequently in practice. By contrast, solving nonlinear Volterra equations usually involves only a slight modification of the algorithm for linear equations. Almost all methods for solving integral equations numerically make use of quadrature rules, frequently Gaussian quadratures.
whereby y_0(x) = f(x) is taken. Namely, if we define the sequence of functions {y_k} by

$$y_k(x) = f(x) + \lambda \int_a^b K(x, t)\, y_{k-1}(t)\, dt, \quad k = 1, 2, \ldots,$$

one can show that the sequence {y_k} converges to the exact solution of equation (11.1.1) if the condition

$$|\lambda| < \frac{1}{M(b - a)}$$

is fulfilled, where M = max |K(x, t)| over a ≤ x, t ≤ b.
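This iteration is easy to sketch in Python (an illustration only; the grid size and trapezoidal quadrature are assumed choices, not taken from this book's Fortran programs):

```python
import numpy as np

def successive_approx(kernel, f, lam, a, b, n=201, iters=60):
    """Method of successive approximations for
    y(x) = f(x) + lam * int_a^b K(x,t) y(t) dt, starting from y_0 = f.
    The integral is approximated by the trapezoidal rule on n nodes."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))      # trapezoidal weights
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(x[:, None], x[None, :])
    y = f(x)                               # y_0 = f
    for _ in range(iters):
        y = f(x) + lam * (K @ (w * y))     # y_k from y_{k-1}
    return x, y
```

For K(x, t) = xt, f(x) = x, λ = 1/2 on [0, 1] the convergence condition holds (|λ|M(b − a) = 1/2 < 1) and the exact solution is y(x) = 1.2x.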
$$(11.3.1) \qquad \int_a^b F(x)\, dx = \sum_{j=1}^{n} A_j F(x_j) + R_n(F),$$

where the abscissas x_1, ..., x_n are from [a, b], the A_j are weight coefficients not depending on F, and R_n(F) is the corresponding remainder term.
If we put in (11.1.1) successively x = x_i (i = 1, ..., n), we obtain

$$(11.3.2) \qquad y(x_i) = \lambda \sum_{j=1}^{n} A_j K(x_i, x_j)\, y(x_j) + \lambda R_n(F_i) + f(x_i),$$

where F_i(t) = K(x_i, t) y(t) (i = 1, ..., n). By discarding the terms R_n(F_i) (i = 1, ..., n), based on (11.3.2) we get the system of linear equations

$$(11.3.3) \qquad y_i - \lambda \sum_{j=1}^{n} A_j K_{ij}\, y_j = f_i \quad (i = 1, \ldots, n),$$
where we put y_i = y(x_i), f_i = f(x_i), K_ij = K(x_i, x_j). System (11.3.3) can also be given in matrix form

$$\begin{bmatrix}
1 - \lambda A_1 K_{11} & -\lambda A_2 K_{12} & \cdots & -\lambda A_n K_{1n} \\
-\lambda A_1 K_{21} & 1 - \lambda A_2 K_{22} & \cdots & -\lambda A_n K_{2n} \\
\vdots & & & \vdots \\
-\lambda A_1 K_{n1} & -\lambda A_2 K_{n2} & \cdots & 1 - \lambda A_n K_{nn}
\end{bmatrix}
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}
=
\begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_n \end{bmatrix}.$$
By solving the obtained system of linear equations in y_1, ..., y_n, the approximate solution of equation (11.1.1) can be presented in the form

$$y(x) \cong f(x) + \lambda \sum_{j=1}^{n} A_j K(x, x_j)\, y_j.$$

In the program realization Simpson's rule is used, with

$$h = \frac{b - a}{2m}, \quad n = 2m + 1, \quad x_i = a + (i - 1)h \quad (i = 1, \ldots, n),$$

$$A_1 = A_n = \frac{h}{3}, \quad A_2 = A_4 = \cdots = A_{2m} = \frac{4h}{3}, \quad A_3 = A_5 = \cdots = A_{2m-1} = \frac{2h}{3}.$$
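The whole procedure, forming the nodes, the Simpson weights above, and the system (11.3.3), can be sketched in a few lines of Python (an illustration paralleling the Fortran subroutine FRED below, not a replacement for it):

```python
import numpy as np

def nystrom_simpson(kernel, f, lam, a, b, m):
    """Nystrom method for y(x) - lam * int_a^b K(x,t) y(t) dt = f(x):
    discretize with Simpson's rule on n = 2m+1 nodes and solve system (11.3.3)."""
    n = 2 * m + 1
    h = (b - a) / (2 * m)
    x = a + h * np.arange(n)
    A = np.full(n, 2.0 * h / 3.0)   # interior odd-numbered nodes A_3, A_5, ...
    A[1::2] = 4.0 * h / 3.0         # even-numbered nodes A_2, A_4, ... (1-based)
    A[0] = A[-1] = h / 3.0          # endpoints A_1, A_n
    K = kernel(x[:, None], x[None, :])
    M = np.eye(n) - lam * K * A[None, :]    # matrix of system (11.3.3)
    y = np.linalg.solve(M, f(x))
    return x, y
```

For the degenerate kernel K(x, t) = xt with f(x) = x and λ = 1 on [0, 1], the exact solution is y(x) = 1.5x, and Simpson's rule reproduces it to rounding error because the integrand is a quadratic in t.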
For solving the system of linear equations (11.3.3) we will use the subroutines LRFAK and RSTS. The code of these subroutines and the description of their parameters are given in Chapter 2.
In subroutine FRED the system of equations (11.3.3) is formed. The parameters in the subroutine parameter list have the following meaning:
X - vector of abscissas of the quadrature formula;
A - vector of weight coefficients of the quadrature formula;
FK - name of the function subroutine with function f and kernel K;
PL - parameter λ;
C - matrix of system (11.3.3), stored as a vector in columnwise way (column by column);
F - vector of free members in system of equations (11.3.3).
The subroutine code is of the form:
SUBROUTINE FRED(X,A,N,FK,PL,C,F)
DIMENSION X(1),A(1),C(1),F(1)
IND=-N
DO 15 J=1,N
IND=IND+N
DO 10 I=1,N
IJ=IND+I
C(IJ)=-PL*A(J)*FK(X(I),X(J),2)
IF(I-1)10,5,10
5 C(IJ)=1+C(IJ)
10 CONTINUE
15 F(J)=FK(X(J),X(J),1)
RETURN
END
Function subroutine FK has the following parameters in the parameter list:
X and T - values of arguments x and t, respectively;
M - integer which governs calculation of the function f (M=1) and the kernel K (M=2) for given values of the arguments.
The subroutine code is of the form:
FUNCTION FK(X,T,M)
GO TO (10,20), M
10 FK=EXP(X)
RETURN
20 FK=X*EXP(X*T)
RETURN
END
The main program is organized in such a way that the system of equations is first formed in FRED, and the matrix of the system is then factorized by subroutine LRFAK, which enables solving the system of equations by subroutine RSTS.
Taking as an example the equation

$$y(x) - \lambda \int_a^b x\, e^{xt}\, y(t)\, dt = e^x$$

(the function f and kernel K as coded in FK above; λ and the integration limits are read from the input file), and M=1, 2 (N=3, 5), the corresponding results are obtained and presented below the main program code. Note that the exact solution of the given equation is y(x) = 1.
EXTERNAL FK
DIMENSION X(10),A(10),C(100),B(10),IP(9)
OPEN(8,FILE='FRED.IN')
OPEN(5,FILE='FRED.OUT')
READ(8,5)PL,DG,GG
5 FORMAT(3F5.0)
10 READ(8,15,END=60) M
15 FORMAT(I2)
N=2*M+1
H=(GG-DG)/(2.*FLOAT(M))
X(1)=DG
DO 20 I=2,N
20 X(I)=X(I-1)+H
Q=H/3.
A(1)=Q
A(N)=Q
DO 25 I=1,M
25 A(2*I)=4.*Q
DO 30 I=2,M
30 A(2*I-1)=2.*Q
CALL FRED(X,A,N,FK,PL,C,B)
CALL LRFAK(C,N,IP,DET,KB)
IF(KB) 35,40,35
35 WRITE(5,45)
45 FORMAT(1H0,'MATRICA SISTEMA SINGULARNA'//)
GO TO 60
40 CALL RSTS(C,N,IP,B)
WRITE(5,50)(B(I),I=1,N)
50 FORMAT(/5X, 'RESENJE'//(10F10.5))
GO TO 10
60 CLOSE(5)
CLOSE(8)
STOP
END
RESENJE
1.00000 0.94328 0.79472
RESENJE
1.00000 1.00000 1.00000 1.00000 0.99998
[16] Kahaner, D., Moler, C., and Nash, S., Numerical Methods and Software. Prentice-Hall, Englewood Cliffs, NJ, 1989.
[17] IMSL Math/Library Users Manual, IMSL Inc., 2500 City West Boulevard, Houston TX 77042.
[18] NAG Fortran Library, Numerical Algorithms Group, 256 Banbury Road, Oxford OX2 7DE, U.K., Chapter F02.