Iterative Methods
Introduction
Two families of methods for solving a linear system:
Direct methods (e.g., LU decomposition): general and robust, but can be complicated when N >= 1M unknowns.
Iterative methods: Jacobi, Gauss-Seidel, Conjugate Gradient, GMRES, Multigrid; typically combined with preconditioning or domain decomposition.
Introduction: Gershgorin Circle Theorem
Every eigenvalue of a matrix \(M\) lies within at least one Gershgorin disc: for some row \(i\),
\[ |\lambda - m_{ii}| \le \sum_{j \ne i} |m_{ij}| \]
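To make the theorem concrete, here is a minimal numpy sketch that computes each row's disc and checks the eigenvalues against them; the 3x3 test matrix is an arbitrary assumption, not one from the slides.

import numpy as np

# Gershgorin discs: center m_ii, radius = sum of off-diagonal |m_ij| in row i
M = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.5],
              [0.0, 1.0, 2.0]])

centers = np.diag(M)
radii = np.sum(np.abs(M), axis=1) - np.abs(centers)
for c, r in zip(centers, radii):
    print(f"disc: center {c}, radius {r}")

# Every eigenvalue must lie in at least one of the discs
print("eigenvalues:", np.linalg.eigvals(M))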
Iterative Methods
Stationary:
\[ x^{(k+1)} = G x^{(k)} + c \]
where \(G\) and \(c\) do not depend on the iteration count \(k\).
Non-stationary:
\[ x^{(k+1)} = x^{(k)} + a_k p^{(k)} \]
where the computation involves information (such as the step size \(a_k\) and search direction \(p^{(k)}\)) that changes at each iteration.
Stationary: Jacobi Method
In the i-th equation, solve for the value of \(x_i\) while assuming the other entries of \(x\) remain fixed:
\[ x_i^{(k)} = \frac{b_i - \sum_{j \ne i} m_{ij}\, x_j^{(k-1)}}{m_{ii}} \]
Gauss-Seidel uses updated entries as soon as they are available:
\[ x_i^{(k)} = \frac{b_i - \sum_{j < i} m_{ij}\, x_j^{(k)} - \sum_{j > i} m_{ij}\, x_j^{(k-1)}}{m_{ii}} \]
SOR takes a weighted combination of the Gauss-Seidel value \(\hat{x}_i^{(k)}\) and the previous iterate:
\[ x_i^{(k)} = w\, \hat{x}_i^{(k)} + (1 - w)\, x_i^{(k-1)} \]
In matrix form, with \(M = D - L - U\) (\(D\) diagonal, \(-L\) strictly lower triangular, \(-U\) strictly upper triangular):
\[ x^{(k)} = (D - wL)^{-1}\bigl(wU + (1 - w)D\bigr)\, x^{(k-1)} + w\,(D - wL)^{-1} b \]
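A minimal numpy sketch of the two component-wise updates above; the convergence tolerance and iteration cap are assumptions.

import numpy as np

def jacobi(M, b, x0, tol=1e-10, max_iter=500):
    # x_i^(k) = (b_i - sum_{j != i} m_ij x_j^(k-1)) / m_ii
    x = x0.astype(float).copy()
    D = np.diag(M)                     # diagonal entries m_ii
    R = M - np.diagflat(D)             # off-diagonal part
    for _ in range(max_iter):
        x_new = (b - R @ x) / D        # every entry uses the previous iterate
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def gauss_seidel(M, b, x0, tol=1e-10, max_iter=500):
    # Same update, but entries computed earlier in the sweep are reused
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = M[i, :i] @ x[:i] + M[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / M[i, i]
        if np.linalg.norm(x - x_old) < tol:
            return x
    return x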
SOR
Choose \(w\) to accelerate convergence:
\[ x_i^{(k)} = w\, \hat{x}_i^{(k)} + (1 - w)\, x_i^{(k-1)} \]
\(w = 1\): Jacobi / Gauss-Seidel
\(1 < w < 2\): over-relaxation
\(w < 1\): under-relaxation
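A minimal SOR sketch built on the Gauss-Seidel sweep shown earlier; the default relaxation factor, tolerance, and iteration cap are assumptions.

import numpy as np

def sor(M, b, x0, w=1.5, tol=1e-10, max_iter=500):
    # w = 1 recovers plain Gauss-Seidel
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = M[i, :i] @ x[:i] + M[i, i+1:] @ x[i+1:]
            gs = (b[i] - s) / M[i, i]           # Gauss-Seidel value
            x[i] = w * gs + (1 - w) * x_old[i]  # weighted combination
        if np.linalg.norm(x - x_old) < tol:
            return x
    return x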
Convergence of Stationary Method
Linear equation: \(Mx = b\)
A sufficient condition for convergence of the Jacobi and Gauss-Seidel iterations is that the matrix \(M\) is strictly diagonally dominant:
\[ |m_{ii}| > \sum_{j=1,\, j \ne i}^{N} |m_{i,j}| \]
The iteration matrices are
Jacobi: \(G = D^{-1}(L + U)\)
Gauss-Seidel: \(G = (D - L)^{-1} U\)
SOR: \(G = (D - wL)^{-1}\bigl(wU + (1 - w)D\bigr)\)
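A quick numerical check of these iteration matrices: the stationary method converges iff the spectral radius \(\rho(G) < 1\). The tridiagonal test matrix below is an assumption, chosen to be strictly diagonally dominant.

import numpy as np

M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])   # strictly diagonally dominant
D = np.diag(np.diag(M))
L = -np.tril(M, -1)                  # split M = D - L - U
U = -np.triu(M, 1)

G_jacobi = np.linalg.inv(D) @ (L + U)
G_gs     = np.linalg.inv(D - L) @ U
rho = lambda G: max(abs(np.linalg.eigvals(G)))
print("rho(Jacobi)       =", rho(G_jacobi))   # < 1, so Jacobi converges
print("rho(Gauss-Seidel) =", rho(G_gs))       # < 1, so Gauss-Seidel converges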
Convergence of Gauss-Seidel
For symmetric positive definite \(A = D - L - L^T\) (so \(U = L^T\)), the eigenvalues of \(G = (D - L)^{-1} L^T\) lie inside the unit circle.
Proof:
Let \(G_1 = D^{1/2} G D^{-1/2} = (I - L_1)^{-1} L_1^T\), where \(L_1 = D^{-1/2} L D^{-1/2}\).
Let \(G_1 x = r x\) with \(x^T x = 1\). Then
\[ L_1^T x = r (I - L_1) x \]
\[ x^T L_1^T x = r (1 - x^T L_1 x) \]
Writing \(y = x^T L_1 x\), this reads \(y = r(1 - y)\), so
\[ r = \frac{y}{1 - y}, \qquad |r| \le 1 \iff \mathrm{Re}(y) \le \tfrac{1}{2}. \]
Since \(A = D - L - L^T\) is positive definite, \(D^{-1/2} A D^{-1/2} = I - L_1 - L_1^T\) is positive definite, so
\(1 - 2 x^T L_1 x \ge 0\), i.e. \(1 - 2y \ge 0\), i.e. \(y \le \tfrac{1}{2}\).
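A numeric sanity check of this claim; the random SPD test matrix is an assumption.

import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5))
A = B @ B.T + 5.0 * np.eye(5)          # symmetric positive definite
D = np.diag(np.diag(A))
L = -np.tril(A, -1)                    # so that A = D - L - L^T

G = np.linalg.inv(D - L) @ L.T
print(np.abs(np.linalg.eigvals(G)))    # all magnitudes < 1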
Linear Equation: an optimization problem
Quadratic function of a vector \(x\):
\[ f(x) = \tfrac{1}{2} x^T A x - b^T x + c \]
Gradient of quadratic form
For symmetric \(A\), \(\nabla f(x) = A x - b\), so \(\nabla f(x) = 0\) exactly when \(Ax = b\).
If \(x\) satisfies \(Ax = b\), then for any \(p\)
\[ f(p) = f(x) + \tfrac{1}{2} (p - x)^T A (p - x). \]
Since, for positive definite \(A\),
\[ \tfrac{1}{2} (p - x)^T A (p - x) > 0 \quad \text{for } p \ne x, \]
we have \(f(p) > f(x)\) if \(p \ne x\): the solution of \(Ax = b\) is the unique minimizer of \(f\).
If \(A\) is not positive definite, \(f\) may have a saddle point or no finite minimum, and the optimization view breaks down.
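A small numerical check of this claim, using the 2x2 system that appears later in the steepest-descent example; the random perturbations are assumptions.

import numpy as np

A = np.array([[3.0, 2.0], [2.0, 6.0]])  # symmetric positive definite
b = np.array([2.0, -8.0])
f = lambda x: 0.5 * x @ A @ x - b @ x

x_star = np.linalg.solve(A, b)          # minimizer = solution of Ax = b
rng = np.random.default_rng(0)
for _ in range(5):
    p = x_star + rng.normal(size=2)     # any p != x_star
    assert f(p) > f(x_star)             # f(p) = f(x*) + 1/2 (p-x*)^T A (p-x*)
print("x* =", x_star, " f(x*) =", f(x_star))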
Non-stationary Iterative Method
State from initial guess x0, adjust it until
close enough to the exact solution
x( i 1) x( i ) a( i ) p( i ) i=0,1,2,3,……
f ( x( i ) ) b Ax( i ) r( i )
x( i 1) x( i ) a( i ) r( i )
Steepest Descent Method (2)
How to choose the step size? Line search: \(a_{(i)}\) should minimize \(f\) along the direction of \(r_{(i)}\), which means \(\frac{d}{da} f(x_{(i+1)}) = 0\):
\[ \frac{d}{da} f(x_{(i+1)}) = \nabla f(x_{(i+1)})^T \frac{d}{da} x_{(i+1)} = \nabla f(x_{(i+1)})^T r_{(i)} = 0 \]
\[ r_{(i+1)}^T r_{(i)} = 0 \quad \text{(successive residuals are orthogonal)} \]
\[ (b - A x_{(i+1)})^T r_{(i)} = 0 \]
\[ \bigl(b - A(x_{(i)} + a_{(i)} r_{(i)})\bigr)^T r_{(i)} = 0 \]
\[ (b - A x_{(i)})^T r_{(i)} = a_{(i)} (A r_{(i)})^T r_{(i)} \]
\[ a_{(i)} = \frac{r_{(i)}^T r_{(i)}}{r_{(i)}^T A r_{(i)}} \]
Steepest Descent Algorithm
Given \(x_0\), iterate until the residual is smaller than the error tolerance:
\[ r_{(i)} = b - A x_{(i)} \]
\[ a_{(i)} = \frac{r_{(i)}^T r_{(i)}}{r_{(i)}^T A r_{(i)}} \]
\[ x_{(i+1)} = x_{(i)} + a_{(i)} r_{(i)} \]
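A direct transcription of the algorithm above into numpy; the tolerance and iteration cap are assumptions. It is applied here to the example system shown below.

import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        r = b - A @ x                    # residual r_(i) = -grad f(x_(i))
        if np.linalg.norm(r) < tol:
            break
        a = (r @ r) / (r @ (A @ r))      # exact line-search step a_(i)
        x = x + a * r
    return x

x = steepest_descent(np.array([[3.0, 2.0], [2.0, 6.0]]),
                     np.array([2.0, -8.0]), np.zeros(2))
print(x)   # converges to [2, -2]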
Steepest Descent Method: example
\[ \begin{bmatrix} 3 & 2 \\ 2 & 6 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 2 \\ -8 \end{bmatrix} \]
[Figure omitted: (c) intersection of surfaces.]
Iterations of Steepest Descent Method
[Figure omitted.]
Convergence of Steepest Descent-1
Let \(v_k = [0, 0, \ldots, 1, \ldots, 0, 0]^T\) (1 in the \(k\)-th position) be the (orthonormal) eigenvectors of \(A\), with eigenvalues \(\lambda_j\), \(j = 1, 2, \ldots, n\).
Expand the error in this basis:
\[ e_{(i)} = \sum_{j=1}^{n} \xi_j v_j \]
Energy norm:
\[ \|e\|_A = (e^T A e)^{1/2} \]
Convergence of Steepest Descent-2
\[ \|e_{(i+1)}\|_A^2 = e_{(i+1)}^T A e_{(i+1)} = (e_{(i)} + a_{(i)} r_{(i)})^T A (e_{(i)} + a_{(i)} r_{(i)}) \]
\[ = e_{(i)}^T A e_{(i)} + 2 a_{(i)} r_{(i)}^T A e_{(i)} + a_{(i)}^2 r_{(i)}^T A r_{(i)} \]
Using \(r_{(i)} = -A e_{(i)}\) and \(a_{(i)} = \frac{r_{(i)}^T r_{(i)}}{r_{(i)}^T A r_{(i)}}\):
\[ \|e_{(i+1)}\|_A^2 = \|e_{(i)}\|_A^2 - 2 \frac{r_{(i)}^T r_{(i)}}{r_{(i)}^T A r_{(i)}} \bigl(r_{(i)}^T r_{(i)}\bigr) + \left( \frac{r_{(i)}^T r_{(i)}}{r_{(i)}^T A r_{(i)}} \right)^2 r_{(i)}^T A r_{(i)} = \|e_{(i)}\|_A^2 \left( 1 - \frac{(r_{(i)}^T r_{(i)})^2}{(r_{(i)}^T A r_{(i)})(e_{(i)}^T A e_{(i)})} \right) \]
In terms of the eigen-expansion:
\[ \|e_{(i+1)}\|_A^2 = \|e_{(i)}\|_A^2 \, \omega^2, \qquad \omega^2 = 1 - \frac{\left( \sum_j \xi_j^2 \lambda_j^2 \right)^2}{\left( \sum_j \xi_j^2 \lambda_j^3 \right)\left( \sum_j \xi_j^2 \lambda_j \right)} \]
Convergence Study (n=2)
Assume \(\lambda_1 \ge \lambda_2\), and let
\[ e_{(i)} = \sum_{j=1}^{2} \xi_j v_j, \qquad u = \xi_2 / \xi_1, \qquad k = \lambda_1 / \lambda_2 \]
Then
\[ \omega^2 = 1 - \frac{(\xi_1^2 \lambda_1^2 + \xi_2^2 \lambda_2^2)^2}{(\xi_1^2 \lambda_1^3 + \xi_2^2 \lambda_2^3)(\xi_1^2 \lambda_1 + \xi_2^2 \lambda_2)} = 1 - \frac{(k^2 + u^2)^2}{(k + u^2)(k^3 + u^2)} \]
Plot of ω
[Figure omitted.]
Case Study
[Figure omitted.]
Bound of Convergence
\[ \omega^2 = 1 - \frac{(k^2 + u^2)^2}{(k + u^2)(k^3 + u^2)} \]
The worst case occurs when \(u^2 = k^2\):
\[ \omega^2 \le 1 - \frac{4 k^4}{k^5 + 2 k^4 + k^3} = \frac{(k - 1)^2}{(k + 1)^2} \]
\[ \omega \le \frac{k - 1}{k + 1} \]
It can be proved that this bound is also valid for \(n > 2\), where \(k = \lambda_{\max} / \lambda_{\min}\).
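A numeric sanity check of the bound: run steepest descent on the 2x2 example and compare the per-step reduction of the energy norm with \((k-1)/(k+1)\). The starting point and number of steps are assumptions.

import numpy as np

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
x_star = np.linalg.solve(A, b)
lam = np.linalg.eigvalsh(A)             # ascending eigenvalues
k = lam[-1] / lam[0]                    # k = lambda_max / lambda_min
bound = (k - 1) / (k + 1)

e_norm = lambda x: np.sqrt((x - x_star) @ A @ (x - x_star))
x = np.zeros(2)
for i in range(8):
    r = b - A @ x
    a = (r @ r) / (r @ (A @ r))
    x_new = x + a * r
    print(f"step {i}: ratio {e_norm(x_new) / e_norm(x):.4f}  (bound {bound:.4f})")
    x = x_new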