Numerical Analysis
June 2, 2018
If the digits from the $(n+1)$th place onward are chopped off, the round-off error is $< 1 \times 10^{-n}$; if the number is instead rounded at the $n$th place, the error is $< 0.5 \times 10^{-n}$.
In general, a nonzero real number $x$ can be represented in the form $x = \pm r \times 10^{n}$, where $\frac{1}{10} \le r < 1$ and $n$ is an integer.
Similarly, in the binary system, a nonzero real number $x$ can be represented in the form $x = \pm q \times 2^{m}$, where $\frac{1}{2} \le q < 1$ and $m$ is an integer. Here $q$ is the mantissa and $m$ is the exponent.
A real number expressed in this form is said to be in normalized floating-point form.
Example: Consider a 32-bit word with 2 sign bits (one each for the mantissa and the exponent), 7 bits for the exponent, and 23 bits for the mantissa. Since the first binary digit of a normalized mantissa is always 1, there are effectively 24 mantissa bits.
The maximum range is from $2^{-127}$ ($\approx 10^{-38}$) to $2^{127}$ ($\approx 10^{38}$), since $|m| \le 2^{7} - 1 = 127$.
In this machine, numbers have a limited precision of roughly seven decimal digits, since the least significant bit of the mantissa represents units of $2^{-24}$ (approximately $10^{-7}$).
Machine overflow and underflow
$|x| > 2^{127}$: overflow
$0 < |x| < 2^{-127}$: underflow
Machine epsilon
For a number system and a rounding procedure, machine epsilon is the maximum relative
error of the chosen rounding procedure.
Fractional error: $\left|\dfrac{x - x^{*}}{x}\right| \le 2^{-24}$, i.e.,
$$x = x^{*}(1 + \delta), \qquad |\delta| \le \epsilon$$
epsilon = 1.0
while 1.0 + 0.5 * epsilon > 1.0:
    epsilon = 0.5 * epsilon
# on exit, 1.0 + epsilon > 1.0 but 1.0 + 0.5*epsilon == 1.0
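Running the loop above in IEEE double precision should yield epsilon $= 2^{-52} \approx 2.22 \times 10^{-16}$, which can be checked against the value reported by the Python standard library (a quick illustrative check, not part of the original notes):

import sys
print(sys.float_info.epsilon)   # 2.220446049250313e-16 for IEEE double precision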
Chopping: Store $x$ as $c$, where $|c| < |x|$ and no machine number lies between $c$ and $x$.
Rounding: Store $x$ as $r$, where $r$ is the machine number closest to $x$. IEEE standard arithmetic uses rounding.
2 Numerical methods
Mathematically equivalent formulas are not necessarily numerically equivalent.
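For instance (a small illustrative sketch; the test value of $x$ is an arbitrary choice), the algebraically identical expressions $1 - \cos x$ and $2\sin^{2}(x/2)$ behave very differently in floating-point arithmetic for small $x$:

import math

x = 1.0e-8
a = 1.0 - math.cos(x)              # catastrophic cancellation: cos(x) rounds to 1.0
b = 2.0 * math.sin(0.5 * x) ** 2   # mathematically equal, numerically accurate

print(a)   # 0.0  -- all significant digits lost
print(b)   # about 5.0e-17 (close to the true value x**2 / 2)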
Numerical method: a procedure used to solve a mathematical problem through the use of numbers, i.e., an arithmetical approach. It provides approximate solutions to complex problems in terms of simple arithmetic operations. Special care is needed if the problem is ill-conditioned or unstable. The general strategy is to replace a difficult problem by an easier one having the same, or a closely related, solution:
infinite by finite
differential by algebraic
nonlinear by linear
complicated by simple
Sources of error: errors in the input data (systematic and random errors) and errors introduced during computation (round-off and truncation errors).
3 Big-O Notation
A function $f(x)$ is said to be of order $g(x)$, denoted $f(x) = O(g(x))$, if there exists a constant $C > 0$ such that $|f(x)| \le C\,|g(x)|$ as $x$ approaches the limit of interest. In numerical analysis the quantity of interest is usually an error as the step size $h \to 0$: an error that is $O(h^{p})$ is bounded by $C h^{p}$ for sufficiently small $h$. For example, $\sin h = h + O(h^{3})$ as $h \to 0$.
4 Approximation of derivatives
$$f(x + \Delta x) = f(x) + f'(x)\,\Delta x + \frac{f''(x)}{2!}\,\Delta x^{2} + O(\Delta x^{3})$$
$$f_{j+1} = f_j + f_j'\,h + \frac{f_j''}{2!}\,h^{2} + O(h^{3}) \qquad (1)$$
$$\frac{f_{j+1} - f_j}{h} = f_j' + O(h) \qquad (2)$$
The truncation error is $O(h)$. What does this mean? It means that if the mesh spacing $h$ is halved, the truncation error is also (approximately) halved.
For the backward difference approximation,
$$f_{j-1} = f_j - f_j'\,h + \frac{f_j''}{2!}\,h^{2} - O(h^{3}) \qquad (3)$$
$$f_j' = \frac{f_j - f_{j-1}}{h} + O(h)$$
To get the expression for the central difference, subtract equation (3) from (1):
$$f_j' = \frac{f_{j+1} - f_{j-1}}{2h} + O(h^{2})$$
If we refine the mesh by a factor of 2, we expect the truncation error to reduce by a factor of 4.
A fourth-order formula can be obtained from function evaluations at additional points, e.g. the standard five-point central formula:
$$f_j' = \frac{-f_{j+2} + 8 f_{j+1} - 8 f_{j-1} + f_{j-2}}{12h} + O(h^{4})$$
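As an illustration (a minimal sketch; the test function $f(x) = \sin x$, the point $x = 1$, and the list of step sizes are arbitrary choices), the expected orders can be verified numerically by halving $h$ and watching the error shrink:

import math

f, dfdx, x = math.sin, math.cos, 1.0

for h in [0.1, 0.05, 0.025, 0.0125]:
    forward = (f(x + h) - f(x)) / h                  # O(h)
    central = (f(x + h) - f(x - h)) / (2.0 * h)      # O(h^2)
    print(h, abs(forward - dfdx(x)), abs(central - dfdx(x)))
# the forward-difference error roughly halves per row,
# the central-difference error roughly quarters per row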
Consider the initial value problem $\dot{y}(t) = f(y, t)$, $t \ge t_0$, $y(t_0) = y_0$.
Analytic functions
Consistency:
differential equation ⇔ discretized equation
Definition: The truncation error should vanish as the step size tends to zero.
Convergence:
Solution of the differential equation ⇔ numerical solution of the discretized equation
Definition: A numerical scheme is said to be convergent if it produces the exact solution of the underlying PDE in the limit $h \to 0$ and $\Delta t \to 0$.
Lax equivalence theorem: stability + consistency = convergence.
6.2 Euler’s methods
$$y(t) = y(t_0) + \int_{t_0}^{t} f(y, \tau)\, d\tau \approx y_0 + f(y_0, t_0)(t - t_0)$$
$$y_{n+1} \approx y_n + h f(y_n, t_n)$$
This is the celebrated Euler's method. It is the cornerstone of the numerical analysis of differential equations.
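A minimal sketch of the explicit method in code (the test problem $\dot{y} = -2y$, $y(0) = 1$, the interval, and the number of steps are arbitrary illustrative choices):

import math

def euler_explicit(f, y0, t0, tend, n):
    """March y' = f(y, t) from t0 to tend in n explicit Euler steps."""
    h = (tend - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y = y + h * f(y, t)   # y_{n+1} = y_n + h f(y_n, t_n)
        t = t + h
    return y

# example: dy/dt = -2y, exact solution y(t) = exp(-2t)
print(euler_explicit(lambda y, t: -2.0 * y, 1.0, 0.0, 1.0, 100), math.exp(-2.0))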
Implicit and explicit Euler's methods
$$y(t + h) = y(t) + h\,\frac{dy}{dt} + \frac{h^{2}}{2!}\,\frac{d^{2}y}{dt^{2}} + O(h^{3})$$
The local error is $O(h^{2})$ and the global error is $O(h)$, so it is called a first-order method.
$$y(t - h) = y(t) - h\,\frac{dy}{dt} + \frac{h^{2}}{2!}\,\frac{d^{2}y}{dt^{2}} - O(h^{3})$$
$$y(t) = y(t - h) + h\,\frac{dy}{dt} + O(h^{2})$$
$$\dot{y}(t) = f(y, t)$$
so that the implicit (backward) Euler update is $y_{n+1} = y_n + h f(y_{n+1}, t_{n+1})$.
Asymptotic stability
An equilibrium $y_e$ is asymptotically stable if
$$\lim_{t \to \infty} \| y(t) - y_e \| = 0$$
Linearizing about the equilibrium,
$$\frac{d}{dt}(y_e + y) = f(y_e + y, t)$$
$$\frac{dy}{dt} = \left.\frac{\partial f}{\partial y}\right|_{y_e} y + O(y^{2})$$
With $f'(y_e) = \lambda$,
$$\frac{dy}{dt} = \lambda y$$
where $\lambda \in \mathbb{C}$ is a system parameter which represents the eigenvalues of linear systems of differential equations. The equation is stable if $\mathrm{Re}(\lambda) \le 0$; for $\mathrm{Re}(\lambda) < 0$ the solution decays exponentially, $\lim_{t \to \infty} y(t) = 0$.
Under what conditions does the numerical solution $y_n$ also decay, $\lim_{n \to \infty} y_n = 0$?
Let us examine the explicit Euler method with $f(y, t)$ replaced by $\lambda y$:
$$y_{n+1} = y_n + h \lambda y_n = y_n (1 + \lambda h)$$
The numerical solution decays only if $|1 + \lambda h| < 1$, i.e., $(1 + \lambda_r h)^{2} + (\lambda_i h)^{2} < 1$, which implies that the region of convergence (the absolute stability region) is the unit disc in the $\lambda h$ plane centered at $-1$.
Therefore, we can conclude that while the analytical solution is stable for all values of $\lambda$ in the left half of the complex plane, the numerical solution is stable only in a much smaller region. This is known as conditional stability.
Let us examine the implicit Euler method with $f(y, t)$ replaced by $\lambda y$:
$$y_{n+1} = y_n + h \lambda y_{n+1}$$
$$y_{n+1} = \frac{y_n}{1 - \lambda h} = \frac{y_{n-1}}{(1 - \lambda h)^{2}} = \cdots = \frac{y_0}{(1 - \lambda h)^{n+1}}$$
The numerical solution decays whenever $|1 - \lambda h| > 1$. For $\mathrm{Re}(\lambda) < 0$ this holds for every step size $h > 0$, so the method is unconditionally stable. In the complex $\lambda h$ plane, the method is convergent everywhere outside the unit disc $(1 - \lambda_r h)^{2} + (\lambda_i h)^{2} \le 1$ centered at $+1$.
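A small numerical experiment (an illustrative sketch; the values $\lambda = -50$ and $h = 0.1$ are arbitrary choices that deliberately violate $|1 + \lambda h| < 1$) shows the conditional stability of the explicit scheme versus the unconditional stability of the implicit one:

lam, h, n = -50.0, 0.1, 20   # lambda*h = -5 lies outside the explicit stability disc

y_exp, y_imp = 1.0, 1.0
for _ in range(n):
    y_exp = y_exp * (1.0 + lam * h)   # explicit Euler amplification factor
    y_imp = y_imp / (1.0 - lam * h)   # implicit Euler amplification factor

print(y_exp)   # grows like (-4)**20, i.e. blows up
print(y_imp)   # decays like (1/6)**20, as the exact solution does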
Implementation of implicit methods
$$F(x) = 0 \qquad (4)$$
Rewrite $F(x) = x - f(x)$, so that solving $F(x) = 0$ is equivalent to the fixed-point problem
$$x = f(x) \qquad (5)$$
$$x_{n+1} = f(x_n) \qquad (6)$$
Fixed-point iteration ($k$ is the iteration counter, $i$ the integration step counter):
$$y_{i+1}^{k+1} = y_i + h f(t_{i+1}, y_{i+1}^{k})$$
Newton iteration:
$$y_{i+1} = y_i + h f(t_{i+1}, y_{i+1})$$
This implies
$$y_{i+1} - y_i - h f(t_{i+1}, y_{i+1}) = 0, \quad \text{i.e.,} \quad F(y_{i+1}) = 0,$$
which is solved iteratively by
$$y_{i+1}^{k+1} = y_{i+1}^{k} + \Delta y_{i+1},$$
with $\Delta y_{i+1}$ obtained from the Newton step $F'(y_{i+1}^{k})\,\Delta y_{i+1} = -F(y_{i+1}^{k})$.
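As a sketch of the fixed-point variant (the tolerance, iteration cap, and test problem are arbitrary illustrative choices), one implicit Euler step can be implemented as:

def implicit_euler_step(f, y_i, t_next, h, tol=1e-12, max_iter=100):
    """Solve y_{i+1} = y_i + h*f(t_{i+1}, y_{i+1}) by fixed-point iteration."""
    y_new = y_i                             # initial guess: previous value
    for _ in range(max_iter):
        y_next = y_i + h * f(t_next, y_new)
        if abs(y_next - y_new) < tol:
            break
        y_new = y_next
    return y_next

# example: one step of dy/dt = -2y from y(0) = 1 with h = 0.1
print(implicit_euler_step(lambda t, y: -2.0 * y, 1.0, 0.1, 0.1))

Note that this simple iteration converges only when $h\,|\partial f/\partial y| < 1$; for stiff problems the Newton iteration above is preferred.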
7.1 Semi-discretisation
$$\frac{\partial u}{\partial t} = \alpha \frac{\partial^{2} u}{\partial x^{2}}$$
$$u(x + \Delta x) = u(x) + u'(x)\,\Delta x + u''(x)\,\frac{\Delta x^{2}}{2} + u'''(x)\,\frac{\Delta x^{3}}{6} + O(\Delta x^{4}) \qquad (9)$$
$$u(x - \Delta x) = u(x) - u'(x)\,\Delta x + u''(x)\,\frac{\Delta x^{2}}{2} - u'''(x)\,\frac{\Delta x^{3}}{6} + O(\Delta x^{4}) \qquad (10)$$
Adding the above two equations gives the central approximation to the second derivative,
$$u''(x) = \frac{u(x + \Delta x) - 2u(x) + u(x - \Delta x)}{\Delta x^{2}} + O(\Delta x^{2}).$$
Applying this at every interior grid point, the semi-discretized system is
$$\frac{du}{dt} = A u(t)$$
where $A = \frac{\alpha}{\Delta x^{2}}\,[1, -2, 1]$ is the tridiagonal matrix.
The solution in terms of the eigenvalues $\lambda_i$ and eigenvectors $v_i$ of the matrix $A$ is
$$u(t) = \sum_i c_i v_i e^{\lambda_i t}$$
With $h = \Delta t$, the explicit (forward Euler) update is
$$u(t + h) = u(t) + h\,A\,u(t)$$
The implicit (backward Euler) update is
$$u(t + h) = u(t) + h\,A\,u(t + h)$$
which requires the solution of a linear system of the form $Ax = b$ at every time step.
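A compact sketch of the semi-discretization (assuming zero Dirichlet boundary values; the grid size, $\alpha$, and time step are arbitrary illustrative choices), using NumPy to build $A$ and to solve the implicit system:

import numpy as np

n, alpha, dx, dt = 10, 1.0, 0.1, 0.001

# tridiagonal matrix A = (alpha/dx^2) * tridiag(1, -2, 1), zero Dirichlet boundaries
A = (alpha / dx**2) * (np.diag(-2.0 * np.ones(n))
                       + np.diag(np.ones(n - 1), 1)
                       + np.diag(np.ones(n - 1), -1))

u = np.sin(np.pi * np.linspace(dx, n * dx, n))        # some initial profile

u_explicit = u + dt * A @ u                           # forward Euler step
u_implicit = np.linalg.solve(np.eye(n) - dt * A, u)   # backward Euler: (I - dt A) u_new = u

print(u_explicit[:3])
print(u_implicit[:3])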
8 Linear Equations
$$Ax = b$$
The least-squares (normal) equations are
$$A^{T}(A\bar{x} - b) = 0$$
$\bar{x}$ is the best-fit estimate: $A\bar{x}$ is the projection of $b$ onto the column space of $A$, which is a subspace of $\mathbb{R}^{m}$ ($m$ = number of rows).
Proof that a left inverse equals a right inverse: if $B_L A = I$ and $A B_R = I$, then $B_L A B_R = B_L I = B_L$ and $B_L A B_R = I B_R = B_R$, hence $B_L = B_R$.
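A brief sketch (the overdetermined system below is made-up illustration data) comparing the normal-equations solution with NumPy's least-squares routine:

import numpy as np

# overdetermined system: fit a line c0 + c1*t to four data points
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.1, 1.9, 3.2, 3.9])
A = np.column_stack([np.ones_like(t), t])

x_normal = np.linalg.solve(A.T @ A, A.T @ b)      # solve A^T A x = A^T b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # library least-squares solver

print(x_normal, x_lstsq)   # the two estimates agree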
LU decomposition: writing $A = LU$, the system $Ax = b$ becomes $LUx = b$; solve $Ly = b$ by forward substitution and then $Ux = y$ by back substitution.
9 Direct methods
Thomas tridiagonal matrix algorithm (TDMA)
Gaussian elimination algorithm
Rule of thumb: direct methods for 1D and 2D and iterative methods for 3D.
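A sketch of the Thomas algorithm (assuming a diagonally dominant tridiagonal system so that no pivoting is needed; the array names a, b, c, d for the sub-, main-, and super-diagonals and the right-hand side are choices made here for illustration):

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal, b = diagonal,
    c = super-diagonal, d = right-hand side (lists of length n; a[0] and
    c[n-1] are unused). Returns the solution as a list."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# example: tridiagonal system with diagonal 2 and off-diagonals -1
print(thomas([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [1, 1, 1, 1]))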
10 Iterative Methods
Iterative methods are less affected by round-off errors and are computationally efficient for large systems.
$$x^{k+1} = f(x^{k})$$
For $Ax = b$, a stationary iteration has the form
$$x^{k+1} = L x^{k} + b$$
and the exact solution satisfies
$$\bar{x} = L \bar{x} + b$$
Subtracting, the error $r^{k} = x^{k} - \bar{x}$ obeys
$$r^{k+1} = L r^{k} = L^{k} r^{1}$$
$$\lim_{k \to \infty} r^{k} = 0 \quad \Longleftrightarrow \quad \lim_{k \to \infty} L^{k} = 0$$
For a matrix iteration operator this requires the spectral radius $\rho(L) < 1$, where $\rho(L) = \max |\lambda|$.
Split $A = D + L + U$ (diagonal, strictly lower, and strictly upper parts). Then $Ax = b$ gives
$$Dx = b - (L + U)x$$
Jacobi iteration:
$$x^{k+1} = D^{-1}\left[b - (L + U)x^{k}\right]$$
Gauss-Seidel iteration:
$$D x^{k+1} = b - L x^{k+1} - U x^{k}$$
or componentwise,
$$x_i^{k+1} = \frac{1}{a_{ii}}\left[b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{k+1} - \sum_{j=i+1}^{n} a_{ij} x_j^{k}\right]$$
In the same form $x^{k+1} = f(x^{k})$, successive over-relaxation (SOR) with relaxation factor $w$ is, componentwise,
$$x_i^{k+1} = (1 - w)\,x_i^{k} + \frac{w}{a_{ii}}\left[b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{k+1} - \sum_{j=i+1}^{n} a_{ij} x_j^{k}\right]$$
In matrix form, with $A = D + L + U$,
$$Ax = b$$
$$(D + L + U)x = wb + (1 - w)b$$
$$(D + w(L + U))x = wb + (1 - w)Dx$$
Taking the updated values $x^{k+1}$ in the $D + wL$ terms and the old values $x^{k}$ elsewhere gives the SOR iteration
$$(D + wL)\,x^{k+1} = wb + \left[(1 - w)D - wU\right]x^{k}$$
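A compact sketch of the componentwise sweep above (assuming a small diagonally dominant test system; the matrix, right-hand side, relaxation factor, and iteration count are arbitrary illustrative choices); with $w = 1$ the sweep reduces to Gauss-Seidel:

import numpy as np

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

def sor(A, b, w=1.2, iters=50):
    """SOR sweeps for Ax = b; w = 1 gives Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
            x[i] = (1.0 - w) * x[i] + w * s / A[i, i]
    return x

print(sor(A, b, w=1.0))          # Gauss-Seidel
print(sor(A, b, w=1.2))          # SOR
print(np.linalg.solve(A, b))     # reference direct solution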
11 Advanced Methods