
MONASH

ENGINEERING
MEC3456

Video Content Slides – Week 01


Taylor Series
MEC3456/MAE3456

Taylor series

§ The Taylor series is


$$ f(x_0 + \delta) = f(x_0) + \delta \frac{\partial f}{\partial x}(x_0) + \frac{\delta^2}{2!}\frac{\partial^2 f}{\partial x^2}(x_0) + \frac{\delta^3}{3!}\frac{\partial^3 f}{\partial x^3}(x_0) + \frac{\delta^4}{4!}\frac{\partial^4 f}{\partial x^4}(x_0) + \cdots $$

§ You should REMEMBER this

3
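The series above can be checked numerically. A small Python sketch (the unit itself uses MATLAB, so treat this as illustrative) uses f(x) = exp(x), chosen because every derivative at x0 equals exp(x0):

```python
import math

# Taylor series check for f(x) = exp(x) about x0, with a small step delta.
# Every derivative of exp at x0 is exp(x0), so each term is delta^k/k! * exp(x0).
x0, delta = 1.0, 0.1
exact = math.exp(x0 + delta)
approx = sum(delta ** k / math.factorial(k) * math.exp(x0) for k in range(5))
# Keeping terms up to delta^4 leaves an error of O(delta^5)
```
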
MEC3456/MAE3456

Multi-dimensional Taylor series

§ For functions of more than one variable (e.g. two)


$$ f(x+a, y+b) = f(x,y) + \left[ a \frac{\partial f}{\partial x}(x,y) + b \frac{\partial f}{\partial y}(x,y) \right] $$
$$ \qquad + \left[ \frac{a^2}{2!}\frac{\partial^2 f}{\partial x^2}(x,y) + ab \frac{\partial^2 f}{\partial x \partial y}(x,y) + \frac{b^2}{2!}\frac{\partial^2 f}{\partial y^2}(x,y) \right] + \cdots $$
$$ \qquad + \frac{1}{n!} \sum_{j=0}^{n} \binom{n}{j} a^{n-j} b^{j} \frac{\partial^n f}{\partial x^{n-j} \partial y^{j}}(x,y) $$

§ Can replace x (or y) by t (or any other variable)


4
MEC3456/MAE3456

END

5
Errors in numerical computation
MEC3456/MAE3456

Errors

§ There are 2 main types of errors that occur in computational


techniques:
– Round-off Errors (errors resulting from hardware)
– Truncation Errors (resulting from algorithm)

§ Can also get “overflow” or “underflow” errors – i.e. numbers that are
too big/small to be represented in a computer
– These are usually due to programming error, and not usually
fundamental to numerical methods

§ What happens to errors as computation proceeds


– influences Stability
7
MEC3456/MAE3456

Round-off error

§ 2 Basic types of numeric data:


– INTEGERS (fixed point numbers), stored exactly:
– e.g. 1, -43, 863

– Real (or floating point) numbers:


– e.g. 0.0, 1.453E-005, -43.3333.

§ We mostly work with real numbers.


§ These are stored and manipulated in scientific notation,
– e.g., 4132.561 = 4.132561 × 10³
8
MEC3456/MAE3456

IEEE 754-2008 32-bit Floating point number representation

sign exponent (8 bits) mantissa (fraction) (23 bits)

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
31 30 23 22 0

§ Sign bit – 1 = “-”, 0 = “+”


§ 8 exponent bits
– stored as an unsigned integer with an offset (bias) of 127, giving exponents from -127 to 128
§ 23 bits for mantissa
– Binary number of the form 1.b22b21b20 ... b2b1b0
– where bit bi multiplies 2^-(23-i)

9
MEC3456/MAE3456

IEEE 754-2008 32-bit Floating point number representation

§ 30.5 = 16 × 1.90625 = 2⁴ × 1.90625
– 0.90625₁₀ = 0.11101₂
– exponent field = 127 + 4 = 131 = 10000011₂

sign | exponent (8 bits) | mantissa (fraction) (23 bits)
0 | 10000011 | 11101000000000000000000

§ 0.0068 = (1/256) × 1.7408 = 2⁻⁸ × 1.7408
– 0.7408₁₀ ≈ 0.10111101101001010001000₂
– exponent field = 127 − 8 = 119 = 01110111₂

sign | exponent (8 bits) | mantissa (fraction) (23 bits)
0 | 01110111 | 10111101101001010001000
10
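These bit patterns can be inspected directly. A Python sketch using the standard `struct` module (the unit uses MATLAB; this is purely illustrative):

```python
import struct

def float32_bits(x):
    """IEEE 754 single-precision bit pattern of x, as a 32-character string."""
    [as_int] = struct.unpack(">I", struct.pack(">f", x))  # reinterpret bytes as uint32
    return f"{as_int:032b}"

bits = float32_bits(30.5)
sign, exponent, mantissa = bits[0], bits[1:9], bits[9:]
# sign = '0', exponent = '10000011' (131 = 127 + 4), mantissa = '11101' + 18 zeros
```

Note that for values like 0.0068, whose fraction does not terminate in binary, the stored mantissa is rounded to 23 bits rather than simply truncated.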
MEC3456/MAE3456

Round-off error

§ 32 bit reals have


– approx. 7 or 8 significant decimal figures
– a range of roughly 10⁻⁴⁰ to 10⁺⁴⁰

§ The first source of round-off error is caused by the finite number of


bits in the mantissa. Often we cannot store ALL of the bits
– Mypi = 3.141592653589793238462643;
– fprintf('%30.25f\n', Mypi)
– 3.1415926535897931000000000
– the digits beyond the 16th significant figure are round-off error

11
MEC3456/MAE3456

Round-off error

§ Additionally, round-off error, εm, is introduced every time an arithmetic


operation is performed (+, -, ×, ÷, exponentiation, functions ...)

§ The typical relative error is called the machine accuracy (εm).


– εm= (actual error) / (magnitude of the number)

§ The machine accuracy for a 32 bit machine is typically :


εm ≈ 3 × 10⁻⁸

§ Each arithmetic operation will introduce an error that is of size O(εm)

12
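The machine accuracy can be queried directly. A Python/NumPy sketch (the unit uses MATLAB, where `eps('single')` plays the same role):

```python
import numpy as np

# "Machine accuracy": the gap between 1.0 and the next representable float.
# (The slide's ~3e-8 for 32-bit is the half-gap round-off bound.)
eps32 = np.finfo(np.float32).eps   # ~1.2e-07
eps64 = np.finfo(np.float64).eps   # ~2.2e-16

# An addend smaller than half the gap is lost entirely:
lost = np.float32(1.0) + np.float32(5e-8) == np.float32(1.0)   # True
```
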
MEC3456/MAE3456

Round-off error

§ It is possible that these errors could all be of the same sign, in which
case after N operations the error estimate is:

Total error ≈ N εm

§ However, usually, the errors add up randomly and after N operations


the error is typically approximately

Total error ≈ √N εm


13
MEC3456/MAE3456

Round-off error

§ Some arithmetic operations will result in significant loss of accuracy.

§ For example, adding 0.0010 to the large number 4000.0 on a hypothetical
computer with a 4-digit mantissa:

0.4000000 × 10⁴
+ 0.0000001 × 10⁴
= 0.4000001 × 10⁴

• The final result is stored as 0.4000 × 10⁴; the small addend is lost

14
MEC3456/MAE3456

Round-off error

§ For example, finding the roots of a quadratic:

$$ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} $$

§ This will produce big errors if b² >> 4ac,
– e.g. b² = 10¹⁰, 4ac = 1
– Writing in scientific notation, with the same exponent:
– b² = 1.0e10, 4ac = 0.0000000001e10 => b² − 4ac = 1.0e10

§ The “4ac” component partially (or totally) disappears when


“b2 - 4ac” is evaluated

15
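A standard remedy is to compute the root whose numerator adds terms of the same sign, then recover the other root from the product x₁x₂ = c/a. A Python sketch (illustrative; the unit uses MATLAB):

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0, computed to avoid the cancellation in
    -b + sqrt(b^2 - 4ac) when b^2 >> 4ac.
    Sketch: assumes real roots and a != 0."""
    disc = math.sqrt(b * b - 4.0 * a * c)
    q = -0.5 * (b + math.copysign(disc, b))  # add terms of the same sign: no cancellation
    return q / a, c / q                      # second root from x1 * x2 = c / a

# With b^2 = 1e16 and 4ac = 4, the naive formula loses the small root:
x_big, x_small = quadratic_roots(1.0, 1e8, 1.0)   # ~(-1e8, -1e-8)
```
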
MEC3456/MAE3456

Truncation error

§ TRUNCATION ERROR arises from the ALGORITHM we use.

§ Typically we approximate a continuous problem with a discrete method.


– e.g. the Taylor series is

$$ y(x+\delta) = y(x) + \delta \frac{\partial y}{\partial x} + \frac{\delta^2}{2!}\frac{\partial^2 y}{\partial x^2} + \frac{\delta^3}{3!}\frac{\partial^3 y}{\partial x^3} + \frac{\delta^4}{4!}\frac{\partial^4 y}{\partial x^4} + \cdots $$

§ If we use more terms, the approximation becomes increasingly accurate.
§ In practice we “truncate” the approximation, e.g.

$$ y(x+\delta) \approx y(x) + \delta \frac{\partial y}{\partial x} + \frac{\delta^2}{2!}\frac{\partial^2 y}{\partial x^2} $$

§ The leading neglected term, (δ³/3!) ∂³y/∂x³, is the truncation error
16


MEC3456/MAE3456

Truncation error

§ Consider the Taylor series expansion of ln(1 + x)

$$ y(x) = \ln(1+x) = \sum_{i=1}^{\infty} \frac{(-1)^{i+1}}{i} x^i = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \frac{x^5}{5} - \frac{x^6}{6} + \cdots $$

§ Truncating the series after the x⁴ term gives ỹ(0.5) = 0.40104167,
while the exact value is y(0.5) = ln(1 + 0.5) = 0.405465108

§ The TRUNCATION error is true − approximation
= 0.405465108 − 0.40104167
= 0.004423441 (≈1% relative error)
17
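The truncated series is easy to reproduce. A Python sketch of the example above (the unit uses MATLAB; this is illustrative):

```python
import math

def ln1p_series(x, n_terms):
    """Taylor series of ln(1 + x) about x = 0, truncated after n_terms terms."""
    return sum((-1) ** (i + 1) * x ** i / i for i in range(1, n_terms + 1))

approx = ln1p_series(0.5, 4)        # 0.40104167, as on the slide
error = math.log(1.5) - approx      # ~0.0044234 (about 1% relative error)
```
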
MEC3456/MAE3456

Stability

§ Another important issue is STABILITY.


– Related to the interaction between round-off error, truncation error and the
numerical algorithm.
§ Round-off ALWAYS introduces error in a numerical method
§ As the method progresses, the error induced at one step will either be
reduced or magnified at the next step
– If the error at one step is reduced at the next step the method is usually STABLE
– If it is magnified it is UNSTABLE and eventually the error swamps the solution
and the approximation is meaningless.
§ We will later look at ways to determine the stability properties of a method.

18
MEC3456/MAE3456

Overflow and underflow errors

§ Overflow and/or underflow errors result from the fact that there is a
maximum and minimum exponent that can be represented
– (approx 10⁻⁴⁰ to 10⁺⁴⁰ for single precision, 32 bit)
– e.g. if real_max is largest real number that can be represented, what is
100*real_max?

§ They are often associated with other errors (e.g. instability)

§ Also, division by zero (in IEEE arithmetic, 1/0.0 gives Inf and 0.0/0.0 gives NaN)


19
MEC3456/MAE3456

Summary

§ Key points from this recording:


– Errors are important
– Difference between round-off and truncation
– Overflow/underflow
– Stability

§ Next recording
– MATLAB overview

20
Matrix equations and their solution
Brief recap of direct methods
MEC3456/MAE3456

Why linear algebra? Real world example

§ Understanding and visualizing


3D volumetric data

§ 3D, multi-valued array that must


be interrogated, filtered,
decomposed
– All involve matrix manipulation
and solution of linear equations

22
MEC3456/MAE3456

Direct methods of solving Ax = b


§ Calculating the inverse of A (i.e. A-1)
−1 −1
§ Multiply entire equation by A-1 A Ax = A b
−1
Ix = A b
§ Direct inversion is expensive −1
x = A b

23
MEC3456/MAE3456

Direct methods of solving Ax = b


§ Gaussian elimination - Covered in ENG1060

§ Very expensive
– For n unknowns (an n x n matrix), requires approx. 2n³/3 operations.
– For n = 10 million (a typical problem), requires approx. 7 × 10²⁰ operations
– On a fast PC (30 GigaFlops), this is 2 × 10¹⁰ sec, or ≈ 700 years
– Even on the world’s fastest computer, over 220 hours.

§ Review of GE is covered in Lecture-03 notes on Moodle and in Review-


03-GaussianElimination.pdf.
– You might need to code this in a lab
24
MEC3456/MAE3456

Tridiagonal form Ax = d

§ Often, systems arise which have the form

$$ \begin{bmatrix} b_1 & c_1 & & & 0 \\ a_2 & b_2 & c_2 & & \\ & a_3 & b_3 & \ddots & \\ & & \ddots & \ddots & c_{n-1} \\ 0 & & & a_n & b_n \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ \vdots \\ d_n \end{bmatrix} $$

§ This is called a 'tridiagonal' system
§ In this case, the efficient 'Thomas algorithm' can be used (O(n))
– This is Gaussian elimination, but we only have to eliminate the a's
– The back substitution only involves one b and one c

§ As the system is tridiagonal, the elements of the matrix A may be efficiently
stored as three vectors – {a, b, c}.
25
MEC3456/MAE3456

Thomas tri-diagonal algorithm

– For an n x n matrix, the Thomas algorithm requires of O(n) operations.

– For n = 10 million, solving the system would require roughly 80 million operations.
– On a very fast PC (30 GigaFlops), this would take roughly 0.002
seconds.
– Compare this with Gaussian Elimination!

– There is the added benefit that you only have to store the numbers on
the tri-diagonal

26
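The algorithm itself is short: a forward sweep that eliminates the sub-diagonal, then back substitution. A Python sketch (the unit uses MATLAB; this version assumes no pivoting is needed, e.g. a diagonally dominant system):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n) with the Thomas algorithm.
    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side.
    Sketch; assumes no pivoting is needed (e.g. diagonal dominance)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward sweep: eliminate the a's
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8]  has solution x = [1,2,3]
x = thomas([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [4.0, 8.0, 8.0])
```
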
MEC3456/MAE3456

END

27
Iterative methods – the Jacobi method
MEC3456/MAE3456

Iterative solutions for Ax = b


a11x1 + a12 x2 + a13 x3 + a14 x4 = b1
§ Consider the 4x4 problem:
a21x1 + a22 x2 + a23 x3 + a24 x4 = b2
a31x1 + a32 x2 + a33 x3 + a34 x4 = b3
a41x1 + a42 x2 + a43 x3 + a44 x4 = b4

§ This can be rewritten in matrix form as:

$$ \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{bmatrix} $$
29
MEC3456/MAE3456

The Jacobi method

a11x1 + a12 x2 + a13 x3 + a14 x4 = b1


§ Our simple problem: a21x1 + a22 x2 + a23 x3 + a24 x4 = b2
a31x1 + a32 x2 + a33 x3 + a34 x4 = b3
a41x1 + a42 x2 + a43 x3 + a44 x4 = b4

§ Solve each equation for its unknown:

$$ x_1 = \frac{1}{a_{11}}\left[ b_1 - a_{12}x_2 - a_{13}x_3 - a_{14}x_4 \right] $$
$$ x_2 = \frac{1}{a_{22}}\left[ b_2 - a_{21}x_1 - a_{23}x_3 - a_{24}x_4 \right] $$
$$ x_3 = \frac{1}{a_{33}}\left[ b_3 - a_{31}x_1 - a_{32}x_2 - a_{34}x_4 \right] $$
$$ x_4 = \frac{1}{a_{44}}\left[ b_4 - a_{41}x_1 - a_{42}x_2 - a_{43}x_3 \right] $$

§ Notice that the unknowns appear on both the LHS and the RHS of the equations.
30
MEC3456/MAE3456

The Jacobi method

§ Introduce the “iteration” number – m and label the initial guess for each xi as
xi0 (and the next estimates xi1, xi2, xi3, ... xim , etc.)

§ Find an updated estimate (xi¹) from the initial guesses using:

$$ x_1^{1} = \frac{1}{a_{11}}\left[ b_1 - a_{12}x_2^{0} - a_{13}x_3^{0} - a_{14}x_4^{0} \right] $$
$$ x_2^{1} = \frac{1}{a_{22}}\left[ b_2 - a_{21}x_1^{0} - a_{23}x_3^{0} - a_{24}x_4^{0} \right] $$
$$ x_3^{1} = \frac{1}{a_{33}}\left[ b_3 - a_{31}x_1^{0} - a_{32}x_2^{0} - a_{34}x_4^{0} \right] $$
$$ x_4^{1} = \frac{1}{a_{44}}\left[ b_4 - a_{41}x_1^{0} - a_{42}x_2^{0} - a_{43}x_3^{0} \right] $$

31
MEC3456/MAE3456

The Jacobi method

§ Introduce the “iteration” number – m and label the initial guess for each xi as
xi0. (and the next estimates xi1, xi2, xi3, ... xim , etc.)

§ Having found the estimates (xi¹), find updated estimates (xi²) using:

$$ x_1^{2} = \frac{1}{a_{11}}\left[ b_1 - a_{12}x_2^{1} - a_{13}x_3^{1} - a_{14}x_4^{1} \right] $$
$$ x_2^{2} = \frac{1}{a_{22}}\left[ b_2 - a_{21}x_1^{1} - a_{23}x_3^{1} - a_{24}x_4^{1} \right] $$
$$ x_3^{2} = \frac{1}{a_{33}}\left[ b_3 - a_{31}x_1^{1} - a_{32}x_2^{1} - a_{34}x_4^{1} \right] $$
$$ x_4^{2} = \frac{1}{a_{44}}\left[ b_4 - a_{41}x_1^{1} - a_{42}x_2^{1} - a_{43}x_3^{1} \right] $$

– NOTE: the “2” is NOT a power, it is the iteration number superscript

32
MEC3456/MAE3456

The Jacobi method

§ Introduce the “iteration” number – m and label the initial guess for each xi as
xi0. (and the next estimates xi1, xi2, xi3, ... xim , etc.)

§ In general, the (m+1)th iteration is found from the mth using:

$$ x_1^{m+1} = \frac{1}{a_{11}}\left[ b_1 - a_{12}x_2^{m} - a_{13}x_3^{m} - a_{14}x_4^{m} \right] $$
$$ x_2^{m+1} = \frac{1}{a_{22}}\left[ b_2 - a_{21}x_1^{m} - a_{23}x_3^{m} - a_{24}x_4^{m} \right] $$
$$ x_3^{m+1} = \frac{1}{a_{33}}\left[ b_3 - a_{31}x_1^{m} - a_{32}x_2^{m} - a_{34}x_4^{m} \right] $$
$$ x_4^{m+1} = \frac{1}{a_{44}}\left[ b_4 - a_{41}x_1^{m} - a_{42}x_2^{m} - a_{43}x_3^{m} \right] $$

– NOTE: the “m” is NOT a power, it is the iteration number superscript

33
MEC3456/MAE3456

The Jacobi method

§ In general, if we have “n” equations, we write:

$$ x_i^{(m+1)} = \frac{1}{a_{ii}}\left[ b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{m} - \sum_{j=i+1}^{n} a_{ij}x_j^{m} \right] $$

§ What are sensible values for the initial guesses, xi0?

§ For some problems, we know values at the boundaries. Use these and
xi⁰ = 0 for interior points.

§ For time dependent problems we could use the value at the previous timestep

§ Otherwise, make a sensible guess ….


34
MEC3456/MAE3456

The Jacobi method – matrix form

§ Split the matrix A into A = D + L + U where

$$ D = \begin{bmatrix} a_{11} & 0 & 0 & 0 \\ 0 & a_{22} & 0 & 0 \\ 0 & 0 & a_{33} & 0 \\ 0 & 0 & 0 & a_{44} \end{bmatrix} \quad L = \begin{bmatrix} 0 & 0 & 0 & 0 \\ a_{21} & 0 & 0 & 0 \\ a_{31} & a_{32} & 0 & 0 \\ a_{41} & a_{42} & a_{43} & 0 \end{bmatrix} \quad U = \begin{bmatrix} 0 & a_{12} & a_{13} & a_{14} \\ 0 & 0 & a_{23} & a_{24} \\ 0 & 0 & 0 & a_{34} \\ 0 & 0 & 0 & 0 \end{bmatrix} $$

§ Then the Jacobi iteration

$$ x_i^{(m+1)} = \frac{1}{a_{ii}}\left[ b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{m} - \sum_{j=i+1}^{n} a_{ij}x_j^{m} \right] $$

can be written in matrix form as

$$ x^{m+1} = D^{-1}\left[ b - Lx^{m} - Ux^{m} \right] $$


35
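The matrix form maps directly to code. A Python/NumPy sketch (the unit uses MATLAB; this assumes a diagonally dominant A so the iteration converges):

```python
import numpy as np

def jacobi(A, b, x0, n_iter=100, tol=1e-10):
    """Jacobi iteration x^{m+1} = D^{-1}[b - (L + U) x^m].
    Sketch; assumes A is diagonally dominant so the iteration converges."""
    D = np.diag(A)                   # diagonal entries of A
    R = A - np.diagflat(D)           # off-diagonal part, L + U
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x_new = (b - R @ x) / D      # every update uses only old values
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant example
b = np.array([7.0, 17.0])
x = jacobi(A, b, np.zeros(2))            # converges to [1.0, 3.0]
```
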
MEC3456/MAE3456

Jacobi method

§ The Jacobi method allows us to solve matrix problems without determining


the inverse of a large matrix.

§ Usually, determining the inverse of a matrix is the most costly part of a


computation (it takes the longest time).

§ If we have a sufficiently large matrix problem, it will be faster to use the


Jacobi method.
– However, the Jacobi method is not guaranteed to converge; whether it does
depends on the properties of A (e.g. diagonal dominance), not just its size

36
MEC3456/MAE3456

END

37
Iterative methods – the Gauss-Seidel method
MEC3456/MAE3456

The Gauss-Seidel method

§ With Jacobi, we always use the “old” values (xim) on the RHS.
§ With Gauss-Seidel, use the updated values as soon as they are
computed (i.e. we use xim+1 values where possible).
$$ x_1^{m+1} = \frac{1}{a_{11}}\left[ b_1 - a_{12}x_2^{m} - a_{13}x_3^{m} - a_{14}x_4^{m} \right] $$
$$ x_2^{m+1} = \frac{1}{a_{22}}\left[ b_2 - a_{21}x_1^{m+1} - a_{23}x_3^{m} - a_{24}x_4^{m} \right] $$
$$ x_3^{m+1} = \frac{1}{a_{33}}\left[ b_3 - a_{31}x_1^{m+1} - a_{32}x_2^{m+1} - a_{34}x_4^{m} \right] $$
$$ x_4^{m+1} = \frac{1}{a_{44}}\left[ b_4 - a_{41}x_1^{m+1} - a_{42}x_2^{m+1} - a_{43}x_3^{m+1} \right] $$
39
MEC3456/MAE3456

The Gauss-Seidel method

§ In general, for n equations, we can write the Gauss-Seidel iteration as:

$$ x_i^{(m+1)} = \frac{1}{a_{ii}}\Bigl[ b_i - \underbrace{\sum_{j=1}^{i-1} a_{ij}x_j^{m+1}}_{\text{new estimates}} - \underbrace{\sum_{j=i+1}^{n} a_{ij}x_j^{m}}_{\text{old estimates}} \Bigr] $$

40
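The only change from Jacobi is updating in place, so each row sees the freshest values. A Python/NumPy sketch (illustrative; the unit uses MATLAB):

```python
import numpy as np

def gauss_seidel(A, b, x0, n_iter=100, tol=1e-10):
    """Gauss-Seidel: like Jacobi, but each x[i] update immediately uses
    the freshly computed x_j^{m+1} for j < i.
    Sketch; assumes A is diagonally dominant so the iteration converges."""
    n = len(b)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        x_old = x.copy()
        for i in range(n):
            s_new = A[i, :i] @ x[:i]          # new estimates, j < i
            s_old = A[i, i + 1:] @ x[i + 1:]  # old estimates, j > i
            x[i] = (b[i] - s_new - s_old) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

x = gauss_seidel(np.array([[4.0, 1.0], [2.0, 5.0]]),
                 np.array([7.0, 17.0]), np.zeros(2))   # converges to [1.0, 3.0]
```
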
MEC3456/MAE3456

Iterative methods – when do we stop?

§ The iteration is repeated until our solution stops changing:

$$ \varepsilon_{a,i} = \left| \frac{x_i^{m+1} - x_i^{m}}{x_i^{m+1}} \right| < \varepsilon_s $$

§ A sufficient condition for convergence of these two iterative methods is
diagonal dominance of the matrix, i.e.

$$ |a_{ii}| \geq \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}| $$
41
MEC3456/MAE3456

The Gauss-Seidel method

§ The G-S method allows us to solve matrix problems without


determining the inverse of a large matrix.

– G-S usually converges faster than the Jacobi method

– but the G-S method can be less stable than Jacobi for some problems

42
MEC3456/MAE3456

END

43
Iterative methods
Successive Over Relaxation (SOR)
MEC3456/MAE3456

Successive Over Relaxation (SOR)

§ Start with Gauss-Seidel:

$$ x_i^{m+1} = \frac{1}{a_{ii}}\left[ b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{m+1} - \sum_{j=i+1}^{n} a_{ij}x_j^{m} \right] $$

§ Add and subtract xiᵐ to write it as the old value plus a correction:

$$ x_i^{m+1} = x_i^{m} + \left\{ \frac{1}{a_{ii}}\left[ b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{m+1} - \sum_{j=i+1}^{n} a_{ij}x_j^{m} \right] - x_i^{m} \right\} $$

§ Introduce the relaxation parameter ω

$$ x_i^{m+1} = x_i^{m} + \omega \left\{ \frac{1}{a_{ii}}\left[ b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{m+1} - \sum_{j=i+1}^{n} a_{ij}x_j^{m} \right] - x_i^{m} \right\} $$

45
MEC3456/MAE3456

Successive Over Relaxation (SOR)

§ This is equivalent to calculating a Gauss-Seidel estimate

$$ \tilde{x}_i^{m+1} = \frac{1}{a_{ii}}\left[ b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{m+1} - \sum_{j=i+1}^{n} a_{ij}x_j^{m} \right] $$

§ And then calculating

$$ x_i^{m+1} = \omega \tilde{x}_i^{m+1} + (1-\omega)\, x_i^{m} \qquad \text{where } 0 \leq \omega \leq 2 $$

§ If ω = 1, then we recover Gauss-Seidel
§ If ω < 1, we have under-relaxation
– Good to try when Gauss-Seidel fails to converge or diverges
§ If ω > 1, we have over-relaxation
– Extra weight put on the latest iteration. Often converges much faster
§ There is an optimum ω – its value depends on the problem
46
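The two-step form translates into a one-line change to Gauss-Seidel. A Python/NumPy sketch (illustrative; the unit uses MATLAB):

```python
import numpy as np

def sor(A, b, x0, omega, n_iter=200, tol=1e-10):
    """SOR: blend each Gauss-Seidel estimate with the previous iterate,
    x[i] = omega * x_gs + (1 - omega) * x[i]; omega = 1 recovers Gauss-Seidel.
    Sketch; assumes 0 < omega < 2 and a system for which the iteration converges."""
    n = len(b)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        x_old = x.copy()
        for i in range(n):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            x[i] = omega * gs + (1.0 - omega) * x[i]   # relaxation step
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

x = sor(np.array([[4.0, 1.0], [2.0, 5.0]]),
        np.array([7.0, 17.0]), np.zeros(2), omega=1.1)   # converges to [1.0, 3.0]
```
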
MEC3456/MAE3456

Successive Over Relaxation (SOR)

– SOR converges quickly for an optimal ωopt where 1 < ωopt < 2

– The optimal value can be found by trial and error or iterative


improvement.

– The optimal relaxation factor can be related to the spectral radius (λmax)
of the Gauss-Seidel iteration matrix:

$$ \omega_{opt} = \frac{2}{1 + \sqrt{1 - \lambda_{max}}} $$

47
MEC3456/MAE3456

Successive Over Relaxation (SOR)

§ An approximation to the spectral radius can be found during the iteration
process by the POWER METHOD.

– In practice, one can use

$$ \lambda_{max} \approx \frac{\left\| x^{(n+2)} - x^{(n+1)} \right\|}{\left\| x^{(n+1)} - x^{(n)} \right\|} $$

– Here ‖·‖ is a measure of the “size” of the vector. It is called a NORM.

– The most commonly used norm is the L2 norm, which is given by

$$ \left\| x^{n+1} - x^{n} \right\| = \sqrt{ \sum_{i=1}^{n} \left( x_i^{n+1} - x_i^{n} \right)^2 } $$
48
MEC3456/MAE3456

Successive Over Relaxation (SOR)

§ A possible strategy might be:

– Do 2-3 iterations of the G-S method (i.e. ω = 1).

– Use these to estimate the spectral radius.

– Use the spectral radius to estimate ω opt.

– Continue using this new relaxation parameter.

49
MEC3456/MAE3456

Iterative solutions - Summary

§ Iterative methods have the following benefits:


– They only require us to store the non-zero elements of a coefficient
matrix ( [A] ).
§ This can significantly reduce the memory requirements of a problem

– They can be faster than determining the inverse of a very large matrix.

§ Iterative methods may have the following problems:


– They may diverge
– They may take a significant time to converge to an acceptable answer.
50
MEC3456/MAE3456

END

51
Root finding
MEC3456/MAE3456

Why/when root finding?

§ Example: Artillery
§ How to define the mathematical
problem?
– Position of target = rT
– Position shell = rS(t,θ)
t is time, θ is inclination of barrel
§ Problem is, when is rT = rS(t,θ)
– i.e.
d(t,θ ) = (rT − rS (t,θ )) ⋅ (rT − rS (t,θ )) = 0
53
MEC3456/MAE3456

Fundamentals – when have we “found” the root numerically?

§ For a root to be exact, we require


– y = f(xR) = 0
§ In practice, this is usually impossible to achieve
§ We accept the approximation
– |f(x)| < e, where e is a small tolerance (that we choose)
[Figure: near the true root xR (where |f(xR)| = 0) there is an interval
[x⁻, x⁺] on which |f(x)| < e, with |f(x⁻)| = |f(x⁺)| = e]

§ Any guess within this region will be classified as a root because |f(x)| < e
54
MEC3456/MAE3456

Root finding – Types of root

§ Two basic types of roots


– Single
– multiple
§ Consider f(x) = x (x − 1)² (x − 2) (x − 3)
§ Single roots at x = 0, 2, 3
§ Double root at x = 1
§ The multiplicity of a root for other
functions is determined by the Taylor
series expansion about the root
55
MEC3456/MAE3456

Root finding

§ For functions of one variable, there are two classes of methods


– Bracketed methods
– Open methods

§ You learnt this in ENG1060


– I will briefly review it here

56
MEC3456/MAE3456

END

57
Root finding – Bracketed methods
MEC3456/MAE3456

Root finding : Bracketing Methods

§ Bracketing methods assume we KNOW an


interval in which the function changes sign

§ Thus we require
– a lower bound xl
– an upper bound xu

§ Because the function changes sign in the interval


[xl, xu], there MUST be a root in between xl xu
§ There might be more than one however

59
MEC3456/MAE3456

Root finding : Bracketing Methods

1. Make a guess for the root based on the bounds

xr = f ( xl , xu )
2. If |f (xr)| < ε,
– we have found the root.
Else
– “choose” a new interval
– i.e. replace either xl or xu by xr
xl xr xu
3. Repeat steps 1 and 2
– Until |f (xr)| < ε

60
MEC3456/MAE3456

Root finding : Bracketing Methods

Choosing the new interval

§ If f (xl) and f (xr) have the same sign:


f (xl ) × f (xr ) > 0
– The root must lie between xr and xu
– We set the “new” xl = xr

xl xr xu

61
MEC3456/MAE3456

Root finding : Bracketing Methods

Choosing the new interval

§ If f (xl) and f (xr) have the same sign:


f (xl ) × f (xr ) > 0
– The root must lie between xr and xu
– We set the “new” xl = xr

xl xr xu
§ If f (xl) and f (xr) have different signs:
f (xl ) × f (xr ) < 0
– The root must lie between xl and xr
– We set the “new” xu = xr 62
MEC3456/MAE3456

Root finding: Bracketing Methods

§ Only thing left to do is determine the next guess - Two basic choices
§ Bisection – new guess is the midpoint:

$$ x_r = \frac{x_l + x_u}{2} $$

§ False position – new guess is the x-crossing of the chord between
(xl, f(xl)) and (xu, f(xu)):

$$ x_r = \frac{f(x_u)\, x_l - f(x_l)\, x_u}{f(x_u) - f(x_l)} $$
63
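The bracketing loop from the previous slides can be sketched in a few lines of Python (illustrative; the unit uses MATLAB):

```python
def bisection(f, xl, xu, eps=1e-10, max_iter=200):
    """Bisection: halve a sign-changing bracket [xl, xu] until |f(xr)| < eps.
    Sketch; assumes f(xl) and f(xu) have opposite signs."""
    for _ in range(max_iter):
        xr = 0.5 * (xl + xu)          # new guess: the midpoint
        if abs(f(xr)) < eps:
            return xr
        if f(xl) * f(xr) > 0.0:       # same sign: root lies in [xr, xu]
            xl = xr
        else:                          # different sign: root lies in [xl, xr]
            xu = xr
    return xr

root = bisection(lambda x: x ** 2 - 2.0, 0.0, 2.0)   # approximates sqrt(2)
```

Swapping the midpoint line for the false-position formula gives the other bracketed method; the rest of the loop is unchanged.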
MEC3456/MAE3456

END

64
Root finding – Open methods
MEC3456/MAE3456

Root finding: Open methods

§ If we have no idea where a root might be, one possibility is to


use Open methods.

§ Open methods also (usually) converge faster


– But can also diverge

§ You know THREE different methods


– Newton-Raphson
– Secant
– Modified Secant

§ Algorithmically they are IDENTICAL (only one formula is changed)


66
MEC3456/MAE3456

Root finding : Open methods

§ Basic open method concept
– Start with a guess xi
1. Calculate f(xi) and the slope mi (= f′(xi))
2. Estimate xi+1 using

$$ x_{i+1} = x_i - \frac{f(x_i)}{m_i} $$

3. Is f(xi+1) close enough to zero?
– If it is – we have found the root
– Else repeat 1–3

§ As with bracketed methods, we need to decide what is close enough
– Use |f(xi+1)| < precision (where we choose the precision)
67
MEC3456/MAE3456

Root finding : Newton-Raphson method

§ If we have an analytic expression for the derivative, f′(xi), we can use it for mi

§ Thus writing mi = f′(xi), we rewrite

$$ x_{i+1} = x_i - \frac{f(x_i)}{m_i} $$

as

$$ x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)} $$

§ This is the Newton-Raphson method


68
MEC3456/MAE3456

Root finding : The Secant method

§ If we don’t have an analytic expression for the derivative:
– Imagine we have two points, our current guess xi and our previous guess xi-1
– Calculate their function values f(xi) and f(xi-1)
– Approximate the derivative as the slope of the line passing through
these two points
§ Easy to show the slope is

$$ m_i \approx \frac{f(x_i) - f(x_{i-1})}{x_i - x_{i-1}} $$

69
MEC3456/MAE3456

Root finding : The Secant method

§ Replacing mi by (f(xi) − f(xi−1)) / (xi − xi−1) in the open method formula

$$ x_{i+1} = x_i - \frac{f(x_i)}{m_i} $$

§ Gives the Secant method formula

$$ x_{i+1} = x_i - \frac{(x_i - x_{i-1})\, f(x_i)}{f(x_i) - f(x_{i-1})} $$

70
MEC3456/MAE3456

Root finding : Variation - The Modified Secant method

§ We only need ONE point to start the modified secant method: xi and its
function value f(xi)

§ We then choose a small increment to xi (call it δ) at which we also
calculate a function value, i.e. f(xi + δ)

§ Approximate the derivative as the slope of the line passing through
these 2 points

§ The slope is (f(xi + δ) − f(xi)) / δ

71
MEC3456/MAE3456

Root finding : The Modified Secant method

§ Replacing mi by (f(xi + δ) − f(xi)) / δ in the open method formula

$$ x_{i+1} = x_i - \frac{f(x_i)}{m_i} $$

§ Gives the modified secant method formula

$$ x_{i+1} = x_i - \frac{\delta\, f(x_i)}{f(x_i + \delta) - f(x_i)} $$

72
MEC3456/MAE3456
Open method root finding : KEY IDEA

§ Estimate the slope of the function, mi, and find where the tangent crosses
the x-axis using:

$$ x_{i+1} = x_i - \frac{f(x_i)}{m_i} $$

§ The three methods just use different approaches to estimate the slope mi
73
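That shared structure means one loop serves all three methods, with only the slope estimate swapped. A Python sketch (illustrative; the unit uses MATLAB, and the secant proper would also carry the previous iterate, omitted here for brevity):

```python
def open_method(f, x0, slope, eps=1e-10, max_iter=100):
    """Generic open method: x_{i+1} = x_i - f(x_i) / m_i, where `slope`
    supplies the estimate of m_i at the current guess."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < eps:          # |f| small enough: accept x as the root
            break
        x = x - fx / slope(f, x)   # tangent (or chord) crossing of the x-axis
    return x

f = lambda x: x * x - 2.0                              # root at sqrt(2)
newton_slope = lambda g, x: 2.0 * x                    # analytic derivative f'(x)
mod_secant_slope = lambda g, x, d=1e-6: (g(x + d) - g(x)) / d

root_nr = open_method(f, 1.0, newton_slope)        # Newton-Raphson
root_ms = open_method(f, 1.0, mod_secant_slope)    # modified secant
```
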
MEC3456/MAE3456

Root finding : Problems with open methods

§ Problems – limit cycles, or diverges .....

[Figure: iterates x0 → x1 → x2 → x3 cycling around a root without converging]

74
MEC3456/MAE3456

Root finding : Problems with open methods

§ Problems – poor convergence

[Figure: iterates x0 ... x5 approaching the root only very slowly]

75
MEC3456/MAE3456

Root finding : Problems with open methods

§ Problems – To infinity and beyond ......

[Figure: a near-zero slope at x0 sends the next iterate far from the root]

76
MEC3456/MAE3456

END

77
Root finding: systems of equations
MEC3456/MAE3456

Root finding: Systems of equations

§ An important problem is to locate the root(s) of a set of n simultaneous (non-
linear) equations:

$$ \begin{cases} f_1(x_1, x_2, \ldots, x_n) = 0 \\ f_2(x_1, x_2, \ldots, x_n) = 0 \\ \qquad \vdots \\ f_n(x_1, x_2, \ldots, x_n) = 0 \end{cases} $$

– Generally, each of the fi is an (n-1)-dimensional hyper-surface embedded in an n-


dimensional space
– Values of xi where all equations are satisfied (if a solution exists) is a point (or set
of points)
79
MEC3456/MAE3456

Systems of equations: 2D example

§ In 2D, consider when a parabola ( y=ax2 + bx + c=0 )


intersects a circle ( (x-d)2+(y-e)2-r2=0 )

§ Example let
– a=2
– b=0
– c=-3
– d=0
– e=0
– r=1.5

§ For these values, 4 points of crossing 80


MEC3456/MAE3456

Systems of equations: 2D example

§ In terms of the general system

$$ \begin{cases} f_1(x_1, x_2, \ldots, x_n) = 0 \\ f_2(x_1, x_2, \ldots, x_n) = 0 \\ \qquad \vdots \\ f_n(x_1, x_2, \ldots, x_n) = 0 \end{cases} $$

§ Our functions are
– f1(x,y) = y − ax² − bx − c
– f2(x,y) = (x − d)² + (y − e)² − r²

§ To simplify, write f1(x,y) = f(x,y) and f2(x,y) = g(x,y), so we seek
f(x,y) = 0 and g(x,y) = 0
81
MEC3456/MAE3456

Generalised Newton-Raphson for two equations

§ So, we have two functions f(x,y), g(x,y).

§ Each may be written (using the 2-D Taylor series about a point (xi, yi)):

$$ f(x_i + \Delta x, y_i + \Delta y) = f(x_i, y_i) + \Delta x \frac{\partial f}{\partial x} + \Delta y \frac{\partial f}{\partial y} + O(\Delta x^2, \Delta y^2) $$
$$ g(x_i + \Delta x, y_i + \Delta y) = g(x_i, y_i) + \Delta x \frac{\partial g}{\partial x} + \Delta y \frac{\partial g}{\partial y} + O(\Delta x^2, \Delta y^2) $$

– Or, dropping the higher-order terms,

$$ f(x_i + \Delta x, y_i + \Delta y) \cong f(x_i, y_i) + \Delta x \frac{\partial f}{\partial x} + \Delta y \frac{\partial f}{\partial y} $$
$$ g(x_i + \Delta x, y_i + \Delta y) \cong g(x_i, y_i) + \Delta x \frac{\partial g}{\partial x} + \Delta y \frac{\partial g}{\partial y} $$

– We aim to find Δx and Δy where f(xi+Δx, yi+Δy) = 0 and g(xi+Δx, yi+Δy) = 0
82
MEC3456/MAE3456

Generalised Newton-Raphson for two equations

§ Letting f(xi+Δx, yi+Δy) = 0 and g(xi+Δx, yi+Δy) = 0 gives us

$$ 0 \cong f(x_i, y_i) + \Delta x \frac{\partial f}{\partial x} + \Delta y \frac{\partial f}{\partial y} $$
$$ 0 \cong g(x_i, y_i) + \Delta x \frac{\partial g}{\partial x} + \Delta y \frac{\partial g}{\partial y} $$

§ And the generalised Newton-Raphson method (in 2D) is written

$$ \begin{bmatrix} \dfrac{\partial f_i}{\partial x} & \dfrac{\partial f_i}{\partial y} \\[4pt] \dfrac{\partial g_i}{\partial x} & \dfrac{\partial g_i}{\partial y} \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} -f(x_i, y_i) \\ -g(x_i, y_i) \end{bmatrix} \qquad \text{(i is the iteration number)} $$
83
MEC3456/MAE3456
Generalised Newton-Raphson for two equations

$$ \begin{bmatrix} \dfrac{\partial f_i}{\partial x} & \dfrac{\partial f_i}{\partial y} \\[4pt] \dfrac{\partial g_i}{\partial x} & \dfrac{\partial g_i}{\partial y} \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} -f(x_i, y_i) \\ -g(x_i, y_i) \end{bmatrix} \quad \text{OR} \quad \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f_i}{\partial x} & \dfrac{\partial f_i}{\partial y} \\[4pt] \dfrac{\partial g_i}{\partial x} & \dfrac{\partial g_i}{\partial y} \end{bmatrix}^{-1} \begin{bmatrix} -f(x_i, y_i) \\ -g(x_i, y_i) \end{bmatrix} $$

§ The Process is iterative…


– Obtain new values for Δx and Δy
– Update the values xi+1 and yi+1, i.e. xi+1=xi+Δx, yi+1=yi+Δy,
– Re-evaluate the function values and the derivatives
– Repeat until | f(xi+1, yi+1) | and | g(xi+1, yi+1) | are < tolerance
§ this coincides with Δx and Δy being very small
– On convergence, we have approximated where the two functions are equal.
84
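The iterative process above can be sketched in Python/NumPy, using the parabola/circle example from the earlier slide (illustrative; the unit uses MATLAB, and in practice the linear system is solved rather than the Jacobian inverted):

```python
import numpy as np

def newton2d(F, J, xy0, tol=1e-10, max_iter=50):
    """Generalised Newton-Raphson for two equations F(x, y) = 0.
    F returns [f, g]; J returns the 2x2 Jacobian of partial derivatives.
    Sketch; assumes a starting guess reasonably close to a root."""
    xy = np.asarray(xy0, dtype=float)
    for _ in range(max_iter):
        Fv = np.asarray(F(xy))
        if np.max(np.abs(Fv)) < tol:             # |f| and |g| below tolerance
            break
        delta = np.linalg.solve(J(xy), -Fv)      # solve J [dx, dy]^T = -F
        xy = xy + delta                          # update the estimate
    return xy

# Parabola/circle example: a=2, b=0, c=-3, d=e=0, r=1.5
F = lambda p: [p[1] - 2.0 * p[0] ** 2 + 3.0,         # f(x,y) = y - 2x^2 + 3
               p[0] ** 2 + p[1] ** 2 - 2.25]          # g(x,y) = x^2 + y^2 - 1.5^2
J = lambda p: [[-4.0 * p[0], 1.0],
               [2.0 * p[0], 2.0 * p[1]]]
root = newton2d(F, J, [1.0, -1.0])   # one of the four crossing points
```
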
MEC3456/MAE3456

Generalised Newton-Raphson

§ Drawbacks…
– Have to write function files for all derivatives as well as functions.
– For 10 functions this is 110 function files required!

– Generally have to be reasonably close to solution

§ Modifications…
– Use the equivalent of the secant method to avoid derivatives.
– There are various Public Domain routines available to do this
§ see Numerical Recipes.
85
MEC3456/MAE3456

Summary

§ Key points from root finding recordings:


– Bracketed methods of root finding
§ Share algorithmic similarity
– Open methods of root finding
§ Based on finding intersection of the tangent with the x-axis
§ Share algorithmic similarity
– Multi-dimensional root finding
§ Need to use an “open” method
§ Problem becomes matrix inversion
§ Next recordings:
– Interpolation in one dimension
86
MEC3456/MAE3456

END

87
