
Lecture Slides

Per-Olof Persson
[email protected]

Department of Mathematics
University of California, Berkeley

Math 228A Numerical Solutions of Differential Equations


IVP Theory and Basic Numerical Methods
Reduction to First Order and Autonomous Systems

Consider the initial value problem

u' = f(u, t),   u(t_0) = u_0

In general,

u : [t_0, t_0 + a] → R^n,   f : R^n × [t_0, t_0 + a] → R^n

An mth order system of s equations can be reduced to a system of ms first-order equations by introducing the new variables v_1 = u', v_2 = u'', ..., v_{m−1} = u^{(m−1)}.
The explicit time-dependency can be removed by introducing a new variable w = t with the trivial equation w' = 1, to obtain a new autonomous system of equations

[u, w]' = [f(u, w), 1],   [u, w](t_0) = [u_0, t_0]
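A minimal sketch of this reduction, using the pendulum equation u'' = −sin u as an assumed example (ode45 is MATLAB's standard adaptive RK solver):

f = @(t,v) [v(2); -sin(v(1))];      % v = [u; u'], so v' = [v2; -sin(v1)]
[t,v] = ode45(f, [0 10], [1; 0]);   % u(0) = 1, u'(0) = 0
plot(t, v(:,1))                     % first component is u(t)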


Linear ODEs

A linear system of ODEs has the form

u' = A(t)u + g(t),   A(t) ∈ R^{s×s},   g(t) ∈ R^s

If A does not depend on t, the system is a constant coefficient


linear system

u' = Au + g(t)

for a constant matrix A ∈ R^{s×s}. For g(t) = 0 the equation is homogeneous, with solution u(t) = e^{A(t−t_0)} u_0, where the matrix exponential for diagonalizable A = V Λ V^{−1} is

e^{At} = V e^{Λt} V^{−1},   e^{Λt} = diag(e^{λ_1 t}, e^{λ_2 t}, ..., e^{λ_s t})
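A short sketch of this formula for an assumed 2 × 2 diagonalizable matrix, compared against MATLAB's built-in expm:

A  = [-2 1; 1 -2];  u0 = [1; 0];  t = 0.5;
[V,D] = eig(A);                         % A = V*D*inv(V), D diagonal
u = V * diag(exp(diag(D)*t)) / V * u0;  % e^(At) u0 via the eigendecomposition
norm(u - expm(t*A)*u0)                  % matches the built-in matrix exponential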
Duhamel’s Principle

For general constant coefficient linear systems u' = Au + g(t), the solution is given by Duhamel's principle:

u(t) = e^{A(t−t_0)} u_0 + ∫_{t_0}^{t} e^{A(t−τ)} g(τ) dτ

This is similar to the concept of Green's functions for boundary value problems: the effect at time t of g(τ) at time τ is e^{A(t−τ)} g(τ).
Lipschitz Condition

Definition
A function f(u, t) is said to be Lipschitz continuous in a region S if there exists a Lipschitz constant λ ≥ 0 such that

‖f(x, t) − f(y, t)‖ ≤ λ‖x − y‖,   x, y ∈ S

If f is smoothly differentiable in S we may set

λ = max_{(x,t)∈S} ‖∂f(x, t)/∂x‖

Definition
Commonly used vector norms:

‖x‖_1 = Σ_{i=1}^{m} |x_i|,   ‖x‖_2 = (Σ_{i=1}^{m} |x_i|^2)^{1/2},   ‖x‖_∞ = max_{1≤i≤m} |x_i|
Well-Posedness

Definition
The initial-value problem

u' = f(u, t),   t_0 ≤ t ≤ t_0 + a,   u(t_0) = u_0

is said to be a well-posed problem if:

A unique solution, u(t), to the problem exists, and
There exist constants ε_0 > 0 and k > 0 such that for any ε, with ε_0 > ε > 0, whenever δ(t) is continuous with ‖δ(t)‖ < ε for all t in [t_0, t_0 + a], and when ‖δ_0‖ < ε, the initial-value problem

z' = f(z, t) + δ(t),   t_0 ≤ t ≤ t_0 + a,   z(t_0) = u_0 + δ_0,

has a unique solution z(t) that satisfies

‖z(t) − u(t)‖ < kε for all t in [t_0, t_0 + a].


Existence and Uniqueness

Theorem
Suppose f(u, t) is continuous in u, t on the cylinder

S = {(u, t) : u ∈ R^d, ‖u − u_0‖ ≤ b, t ∈ [t_0, t_0 + a]}

with a, b > 0 and ‖·‖ a given vector norm. Then the IVP

u' = f(u, t),   u(t_0) = u_0   (1)

has at least one solution for t_0 ≤ t ≤ t_0 + α, where

α = min(a, b/µ),   µ = sup_{(u,t)∈S} ‖f(u, t)‖
Existence and Uniqueness

Theorem (Picard–Lindelöf)
Subject to both continuity and Lipschitz continuity of f in S, (1)
has a unique solution in [t0 , t0 + α] and is well-posed.

Theorem
If f is C r for some r ≥ 1, the solution u(u0 , t) is also C r as a
function of t and the initial condition u0 .
Basic Schemes for IVPs

Consider the initial value problem

u' = f(t, u),   0 ≤ t ≤ T,   u(0) = u_0

Distribute mesh points equally throughout [0, T ]:

t_n = nh,   for each n = 0, 1, 2, ..., N

The step size is h = T/N = t_{n+1} − t_n


Standard One-Step Methods

Forward Euler

u_{n+1} = u_n + h f(t_n, u_n)

with error bound at time t_n = nh ≤ T

‖u_n − u(t_n)‖ ≤ e^{LT} ‖u_0 − u(0)‖ + (Mh/2L)(e^{LT} − 1)

where L is a Lipschitz constant for f and M ≥ ‖u''‖.
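A minimal MATLAB sketch of forward Euler in this notation (f is a function handle f(t, u)):

function u = feuler(f, u0, T, N)
h = T/N;
u = zeros(length(u0), N+1);
u(:,1) = u0;
for n = 1:N
    u(:,n+1) = u(:,n) + h*f((n-1)*h, u(:,n));  % u_{n+1} = u_n + h f(t_n, u_n)
end
end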
Backward Euler

u_{n+1} = u_n + h f(t_{n+1}, u_{n+1})

Taylor series method of order k:

u_{n+1} = u_n + h f(t_n, u_n) + (h^2/2) f'(t_n, u_n) + ··· + (h^k/k!) f^{(k−1)}(t_n, u_n)
Multistep Methods

Definition
An m-step multistep method for solving the initial-value problem

u' = f(t, u),   0 ≤ t ≤ T,   u(0) = u_0,

has a difference equation for approximating u_{n+1}:

u_{n+1} = a_{m−1} u_n + a_{m−2} u_{n−1} + ··· + a_0 u_{n+1−m}
        + h[b_m f(t_{n+1}, u_{n+1}) + b_{m−1} f(t_n, u_n) + ··· + b_0 f(t_{n+1−m}, u_{n+1−m})]

Explicit method if b_m = 0, implicit method if b_m ≠ 0.


Multistep Methods

Fourth-Order Adams-Bashforth Technique

u_{n+1} = u_n + (h/24)[55 f(t_n, u_n) − 59 f(t_{n−1}, u_{n−1}) + 37 f(t_{n−2}, u_{n−2}) − 9 f(t_{n−3}, u_{n−3})]

Fourth-Order Adams-Moulton Technique

u_{n+1} = u_n + (h/24)[9 f(t_{n+1}, u_{n+1}) + 19 f(t_n, u_n) − 5 f(t_{n−1}, u_{n−1}) + f(t_{n−2}, u_{n−2})]
Runge-Kutta Methods

Runge-Kutta Order Four

k_1 = f(t_n, u_n)
k_2 = f(t_n + h/2, u_n + (h/2) k_1)
k_3 = f(t_n + h/2, u_n + (h/2) k_2)
k_4 = f(t_n + h, u_n + h k_3)
u_{n+1} = u_n + (h/6)(k_1 + 2k_2 + 2k_3 + k_4)
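The same scheme as a short MATLAB sketch (one step; f is a function handle):

function unew = rk4step(f, t, u, h)
k1 = f(t, u);
k2 = f(t + h/2, u + h/2*k1);
k3 = f(t + h/2, u + h/2*k2);
k4 = f(t + h,   u + h*k3);
unew = u + h/6*(k1 + 2*k2 + 2*k3 + k4);
end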
Runge-Kutta Methods

General form of Runge-Kutta methods:

k_i = f(t_n + c_i h, u_n + h Σ_{j=1}^{s} a_{ij} k_j),   i = 1, ..., s
u_{n+1} = u_n + h Σ_{j=1}^{s} b_j k_j

Runge-Kutta tableau or Butcher array:

c | A
  | b^T

Explicit Runge-Kutta method if a_{ij} = 0 when j ≥ i
Explicit Runge-Kutta Methods

Forward/Backward Euler:

0 | 0        1 | 1
  | 1          | 1

The explicit midpoint method:

 0  |
1/2 | 1/2
    |  0   1

Four-stage fourth-order method:

 0  |
1/2 | 1/2
1/2 |  0   1/2
 1  |  0    0    1
    | 1/6  1/3  1/3  1/6
Implicit Runge-Kutta Methods

Implicit midpoint method:

1/2 | 1/2
    |  1

Hammer-Hollingsworth:

1/2 − √3/6 |    1/4       1/4 − √3/6
1/2 + √3/6 | 1/4 + √3/6      1/4
           |    1/2          1/2
Convergence of Euler’s Method
Review

We consider the initial value problem (IVP)

y' = f(t, y),   y(0) = y_0,   0 ≤ t ≤ T

Lipschitz continuity in y of a function f(t, y):

‖f(t, x) − f(t, y)‖ ≤ L‖x − y‖,   ∀x, y

If f is smoothly differentiable,

L = max_{x,t} ‖∂f(t, x)/∂x‖

(Forward) Euler's method:

u_{n+1} = u_n + h f(t_n, u_n)

with h = T/N and t_n = nh
Convergence of Euler’s Method

Definition
A method for the IVP converges if, for Lipschitz continuous f,

max_{n=0,1,...,T/h} ‖u_n − y(t_n)‖ → 0

as h → 0 and u_0 → y_0. It is accurate of order p if

max_{n=0,1,...,T/h} ‖u_n − y(t_n)‖ = O(h^p) + O(‖u_0 − y_0‖)

as h → 0 and u_0 → y_0.

Theorem
Euler's method converges for any IVP with f Lipschitz and y ∈ C^2.
Convergence of Euler’s Method

One-step error or Local Truncation Error (LTE)

The error if the true solution is inserted into the difference scheme:

y(t_{n+1}) = y(t_n) + h f(t_n, y(t_n)) + τ_n

or

τ_n = y(t_{n+1}) − [y(t_n) + h f(t_n, y(t_n))]

Warning: The LTE is often defined with a division by h.

Use Taylor expansion with remainder term on y(t_{n+1}) (component-wise):

τ_n = y(t_n) + h y'(t_n) + (h^2/2) y''(t_n + θ_n h) − [y(t_n) + h f(t_n, y(t_n))]
    = (h^2/2) y''(t_n + θ_n h)

which gives ‖τ_n‖ ≤ M h^2/2 with the bound ‖y''‖ ≤ M. The scheme is consistent if ‖τ_n‖ → 0 as h → 0.
Convergence of Euler’s Method

Subtract the definition of the LTE from the scheme:

u_{n+1} = u_n + h f(t_n, u_n)
−  [ y(t_{n+1}) = y(t_n) + h f(t_n, y(t_n)) + τ_n ]

u_{n+1} − y(t_{n+1}) = u_n − y(t_n) + h [f(t_n, u_n) − f(t_n, y(t_n))] − τ_n

Bound the truncation errors by τ ≥ ‖τ_n‖ for all n, take norms and use the triangle inequality:

‖e_{n+1}‖ ≤ ‖e_n‖ + h‖f(t_n, u_n) − f(t_n, y(t_n))‖ + τ

and use the Lipschitz continuity to get

‖e_{n+1}‖ ≤ ‖e_n‖ + hL‖e_n‖ + τ = (1 + hL)‖e_n‖ + τ


Convergence of Euler’s Method

Apply recursively:

‖e_{n+1}‖ ≤ (1 + hL)‖e_n‖ + τ ≤ (1 + hL)^2 ‖e_{n−1}‖ + (1 + (1 + hL))τ
         ≤ ··· ≤ (1 + hL)^{n+1} ‖e_0‖ + [((1 + hL)^{n+1} − 1)/((1 + hL) − 1)] τ

Suppose e_0 = 0 (exact initial conditions u_0 = y_0), and use (1 + hL) ≤ e^{hL}:

‖e_n‖ ≤ [((1 + hL)^n − 1)/(hL)] τ ≤ [(e^{nhL} − 1)/(hL)] τ ≤ [(e^{LT} − 1)/(LT)] nτ,

which is valid for 0 ≤ t_n = nh ≤ T   =⇒ stability. Finally, with τ = M h^2/2,

‖e_n‖ ≤ [(e^{LT} − 1)/(2LT)] M h^2 n ≤ (Mh/2L)(e^{LT} − 1) = O(h)
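The O(h) bound is easy to observe numerically; a sketch, using the assumed test problem y' = −y, y(0) = 1:

f = @(t,y) -y;  T = 1;  yex = exp(-T);
for N = [10 20 40 80 160]
    h = T/N;  y = 1;
    for n = 1:N
        y = y + h*f((n-1)*h, y);
    end
    fprintf('h = %8.5f   error = %10.3e\n', h, abs(y - yex));
end
% the error halves when h halves, consistent with O(h) convergence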
Stiff Equations and Linear Stability Theory
Time-step restrictions for Euler’s method

While Euler’s method converges to the true solution as h → 0, it


can produce exponentially growing errors for finite h.

Example
Consider the IVP

y' = f(t, y) = −y,   y(0) = 1

with solution y(t) = e^{−t}. Euler's method with h = 0.5 gives y(30) ≈ u_60 = 9·10^{−19}. But with h = 3 it gives y(30) ≈ u_10 = 1024.

This leads to stability restrictions on the timestep h.


Stiff Initial Value Problems
Example
Consider the system of ODEs

y_1'(t) = −2001 y_1(t) + 999 y_2(t)
y_2'(t) = 1998 y_1(t) − 1002 y_2(t)

with solution

y_1(t) = A e^{−3000t} + B e^{−3t}
y_2(t) = −A e^{−3000t} + 2B e^{−3t}

Set A = 0 to get initial conditions (y_1(0), y_2(0)) = (1, 2). Euler's method with h = 0.01 then gives y_1(1) ≈ (u_1)_{100} = −1.5·10^{128}, even though the rapid term e^{−3000t} is not present in the solution.

Example
Backward Euler's method with h = 0.01 gives (u_1)_{100} = 0.0520, which is close to the true solution y_1(1) = e^{−3} = 0.0498.
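A sketch reproducing this comparison in MATLAB (same system, same h; only the mechanism is illustrated here):

A = [-2001 999; 1998 -1002];  h = 0.01;  N = 100;  I = eye(2);
uf = [1; 2];  ub = [1; 2];
for n = 1:N
    uf = uf + h*A*uf;     % forward Euler: 1 + h*(-3000) = -29, explodes
    ub = (I - h*A) \ ub;  % backward Euler: solve (I - hA) u_{n+1} = u_n
end
[uf(1); ub(1); exp(-3)]   % backward Euler is close to the true y1(1)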
Stiff Initial Value Problems

Various ways to define stiffness of IVPs, but none is rigorous:

Definition 1
A stiff IVP is one for which we have to use implicit methods to get
accurate results at reasonable cost.

Definition 2
A system of ODEs with high "stiffness ratio"

max_p |λ_p| / min_p |λ_p|

for the eigenvalues λ_p of the Jacobian matrix ∂f/∂u has a large range of time scales and might therefore be stiff.

Note: Even scalar ODEs can be stiff, for example

y'(t) = −L(y − ϕ(t)) + ϕ'(t),   y(0) = y_0,   with solution   y(t) = e^{−Lt}(y_0 − ϕ(0)) + ϕ(t).
Absolute Stability

Consider the scalar model problem

y' = λy

for complex λ. Write a one-step method with stepsize h in the form

u_{n+1} = R(z) u_n

where z = hλ, for some function R(z). The region of absolute stability (RAS) is then

S = {z ∈ C : |R(z)| ≤ 1}

The method is A-stable if S contains the entire left half-plane


{z ∈ C : Re(z) ≤ 0}. It is A(α)-stable if S contains the sector of
angle α around the negative real axis.
Regions of Absolute Stability

Example
Euler's method:

u_{n+1} = u_n + h f(u_n) = u_n + hλ u_n = (1 + hλ) u_n

RAS: |R(z)| = |1 + z| ≤ 1 (the inside of a disk of radius 1 centered at z = −1).

Example
Backward Euler method:

u_{n+1} = u_n + h f(u_{n+1}) = u_n + hλ u_{n+1}   =⇒   u_{n+1} = (1 − hλ)^{−1} u_n

RAS: |R(z)| = |(1 − z)^{−1}| ≤ 1 ⇐⇒ |1 − z| ≥ 1 (the outside of a disk of radius 1 centered at z = 1).
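A sketch that draws the boundaries |R(z)| = 1 of these regions (the RK4 stability polynomial used here is derived on a later slide):

[x,y] = meshgrid(-4:0.02:2, -3:0.02:3);
z = x + 1i*y;
contour(x, y, abs(1 + z), [1 1], 'b'); hold on            % forward Euler
contour(x, y, abs(1./(1 - z)), [1 1], 'r');               % backward Euler
contour(x, y, abs(1 + z + z.^2/2 + z.^3/6 + z.^4/24), ... % classical RK4
        [1 1], 'k');
axis equal; grid on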
Multistep Methods

For a linear multistep method

Σ_{j=0}^{r} α_j u_{n+j} = h Σ_{j=0}^{r} β_j f_{n+j}

the RAS is the set of points z such that the roots ζ_j of the polynomial

π(ζ; z) = ρ(ζ) − z σ(ζ),   ρ(ζ) = Σ_{j=0}^{r} α_j ζ^j,   σ(ζ) = Σ_{j=0}^{r} β_j ζ^j

are less than one in magnitude (or equal to one if not repeated).
More on this later.
Linear Systems of Equations

For a linear system of equations y' = Ay, with A a constant m × m matrix, suppose A is diagonalizable, A = V Λ V^{−1}, and introduce w(t) = V^{−1} y(t):

V^{−1} y'(t) = (V^{−1} A V)(V^{−1} y(t))
w'(t) = Λ w(t)

That is, the system decouples into m independent scalar equations w_i'(t) = λ_i w_i(t).

The same technique can be used on, for example, Euler's method:

u_{n+1} = u_n + h A u_n   =⇒   v_{n+1} = v_n + h Λ v_n

where v_n = V^{−1} u_n, to see that each eigenvalue hλ_i must be in the stability region of the method for stability.
Explicit Runge-Kutta (ERK) Methods
Taylor Series Methods

Consider the IVP y' = f(t, y) with y(0) = y_0, and the Taylor series expansion for y(t_{n+1}) about y(t_n):

y(t_{n+1}) = Σ_{i=0}^{∞} (h^i/i!) y^{(i)}(t_n) = y_n + h y'(t_n) + (h^2/2) y''(t_n) + ···

Note that Euler's method approximates

y(t_{n+1}) ≈ y_n + h y'(t_n) = y_n + h f(t_n, y(t_n))

A Taylor series method of order k keeps all terms up to h^k:

y(t_{n+1}) ≈ y_n + h y'(t_n) + (h^2/2) y''(t_n) + ··· + (h^k/k!) y^{(k)}(t_n)
Write the derivatives of y at t_n in terms of f and its derivatives:

y'(t_n) = f(t_n, y(t_n))
y''(t_n) = f_t + Df · f
...

to get e.g. the Taylor series method of order 2:

u_{n+1} = u_n + h f(t_n, u_n) + (h^2/2)[f_t(t_n, u_n) + Df(t_n, u_n) · f(t_n, u_n)]

Main drawback: needs derivatives of f


Local Truncation Error

Using the Taylor series expansion with derivative form of the remainder term, it is clear that

τ_n = y_{n+1} − [y_n + h y'(t_n) + (h^2/2) y''(t_n) + ··· + (h^k/k!) y^{(k)}(t_n)]
    = (h^{k+1}/(k+1)!) y^{(k+1)}(t_n + θ_n h)

for some θ_n. Therefore,

‖τ_n‖ ≤ M h^{k+1}/(k+1)! = O(h^{k+1})   with ‖y^{(k+1)}‖ ≤ M
Linear Stability

For the scalar test problem y' = f(t, y) = λy with λ ∈ C, we get

y' = f = λy
y'' = f' = f_y y' = λ(λy) = λ^2 y
...
y^{(k)} = λ^k y

Therefore, for the Taylor series method of order k,

u_{n+1} = u_n + hλ u_n + (h^2/2) λ^2 u_n + ··· + (h^k/k!) λ^k u_n = R(hλ) u_n

with R(z) = 1 + z + z^2/2 + ··· + z^k/k!. Note that R(z) → e^z as k → ∞, since for the true solution,

y(t_{n+1}) = e^{λ t_{n+1}} = e^{λ(t_n + h)} = e^{λh} e^{λ t_n} = e^{hλ} y(t_n)


Convergence of One-Step Methods

A general explicit one-step method can be written

u_{n+1} = u_n + h · Φ(t_n, u_n, h)

It can be shown, similar to the proof of convergence for Euler’s


method, that if τ (h)/h → 0 as h → 0 and Φ is Lipschitz, the
method is convergent with error O(τ (h)/h).

The Taylor series method of order k has LTE τ (h) = O(hk+1 ), and
is therefore convergent of order k.
Runge-Kutta Methods

General form of Runge-Kutta methods:

k_i = f(t_n + c_i h, u_n + h Σ_{j=1}^{s} a_{ij} k_j),   i = 1, ..., s
u_{n+1} = u_n + h Σ_{j=1}^{s} b_j k_j

Runge-Kutta tableau or Butcher array:

c | A
  | b^T

Explicit Runge-Kutta method if a_{ij} = 0 when j ≥ i
The Explicit Midpoint Method

The idea of the Runge-Kutta methods is to increase the order of accuracy by evaluating f(t, y) at several points for each timestep. For example:

k_1 = f(t_n, u_n)
k_2 = f(t_n + h/2, u_n + (h/2) k_1)
u_{n+1} = u_n + h k_2

is the explicit midpoint method. It is a one-step method, since

u_{n+1} = u_n + h f(t_n + h/2, u_n + (h/2) f(t_n, u_n))

depends only on t_n, u_n. Its Runge-Kutta tableau is

 0  |
1/2 | 1/2
    |  0   1
Explicit/Implicit Trapezoidal Rules

Other examples include the explicit trapezoidal rule (or modified Euler):

k_1 = f(t_n, u_n)
k_2 = f(t_n + h, u_n + h k_1)
u_{n+1} = u_n + (h/2)(k_1 + k_2)

0 | 0    0
1 | 1    0
  | 1/2  1/2

and the implicit trapezoidal rule:

k_1 = f(t_n, u_n)
k_2 = f(t_{n+1}, u_{n+1}) = f(t_n + h, u_n + (h/2)(k_1 + k_2))
u_{n+1} = u_n + (h/2)(k_1 + k_2)

0 |  0    0
1 | 1/2  1/2
  | 1/2  1/2
Runge-Kutta Order Bounds

For an explicit Runge-Kutta (ERK) method with s stages and order of accuracy p:

s ≥ p       for any p
s ≥ p + 1   for p > 4
s ≥ p + 2   for p > 6
s ≥ p + 3   for p > 7

For p > 4, systems of equations may have lower order than scalar equations.
Autonomization

To simplify the analysis, we remove the time-dependency. This leads to a consistency requirement on the RK tableau.
Add the trivial equation

t' = 1,   t(0) = t_0

For this equation, all k_i = 1, so the time t used at stage i becomes

t = t_n + h Σ_{j=1}^{s} a_{ij} · 1

Match this with the time used in the non-autonomous RK method:

t = t_n + c_i h

which gives the requirement

c_i = Σ_{j=1}^{s} a_{ij}
Example: General 2-stage ERK

Consider a completely general two-stage explicit Runge-Kutta method:

k_1 = f(u_n)
k_2 = f(u_n + h a k_1)
u_{n+1} = u_n + h(b_1 k_1 + b_2 k_2)

0 | 0    0
a | a    0
  | b_1  b_2

The LTE at step n becomes

τ_n = y_{n+1} − y_n − h[b_1 f(y_n) + b_2 f(y_n + h a f(y_n))]
    = h y_n' + (h^2/2) y_n'' + (h^3/6) y_n''' + O(h^4) − h[b_1 f + b_2 f(y_n + h a f)]

For simplicity, assume that y is scalar:

y' = f
y'' = f' = f_y y' = f_y f
y''' = f'' = f_{yy}(y') f + f_y f' = f_{yy} f^2 + f_y^2 f
Example: General 2-stage ERK

Expand f(y_n + h a f) in a Taylor series:

f(y_n + h a f) = f + f_y h a f + (h^2 a^2/2) f_{yy} f^2 + O(h^3)

and substitute into the LTE and collect equal powers of h:

τ_n = h f + (h^2/2) f_y f + (h^3/6)(f_{yy} f^2 + f_y^2 f) + O(h^4)
    − h b_1 f − h b_2 f − h^2 b_2 a f_y f − (h^3/2) b_2 a^2 f_{yy} f^2 + O(h^4)
    = h(f − b_1 f − b_2 f) + h^2 (1/2 − b_2 a) f_y f
    + h^3 [(1/6) f_{yy} f^2 + (1/6) f_y^2 f − (1/2) b_2 a^2 f_{yy} f^2] + O(h^4)
Example: General 2-stage ERK

Try to eliminate as many terms as possible:

b_1 + b_2 = 1
b_2 a = 1/2

It is impossible to eliminate the h^3-term, so τ_n = O(h^3) at best. Many solutions are possible:

 0  | 0    0        0 | 0    0        0  |  0    0
1/2 | 1/2  0        1 | 1    0       3/4 | 3/4   0
    | 0    1          | 1/2  1/2         | 1/3  2/3

But we can choose the scheme that minimizes the h^3-term, by cancelling two of its terms: 1/6 = (1/2) b_2 a^2   =⇒   b_2 a^2 = 1/3. This leads to Heun's method:

a = 2/3,   b_1 = 1/4,   b_2 = 3/4

 0  |  0    0
2/3 | 2/3   0
    | 1/4  3/4
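A sketch verifying second-order convergence for Heun's choice, on the assumed test problem y' = −y + sin t with exact solution y = (3/2)e^{−t} + (sin t − cos t)/2:

f   = @(t,y) -y + sin(t);
yex = @(t) 1.5*exp(-t) + (sin(t) - cos(t))/2;   % exact solution, y(0) = 1
a = 2/3;  b1 = 1/4;  b2 = 3/4;  T = 1;
for N = [20 40 80 160]
    h = T/N;  y = 1;
    for n = 1:N
        t  = (n-1)*h;
        k1 = f(t, y);
        k2 = f(t + a*h, y + a*h*k1);
        y  = y + h*(b1*k1 + b2*k2);
    end
    fprintf('h = %7.4f   error = %9.2e\n', h, abs(y - yex(T)));
end
% errors drop by ~4x per halving of h, i.e. O(h^2)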
Implicit Runge-Kutta (IRK) Methods
Newton’s Method

Newton's Method
Solve F(x) = 0 iteratively, starting from an initial guess x^{(0)}. Taylor expansion of F(x) about the last iterate x^{(k)} with remainder term:

F(x) = F(x^{(k)}) + DF(x^{(k)}) · (x − x^{(k)}) + (1/2) D^2 F_{ij} (x − x^{(k)})_i (x − x^{(k)})_j,

where the form of D^2 F_{ij} and where it is evaluated is unimportant. Set F(x) = 0, assume the quadratic terms are small, and solve for a new x ≈ x^{(k+1)}:

DF(x^{(k)}) Δx^{(k)} = −F(x^{(k)})
x^{(k+1)} = x^{(k)} + Δx^{(k)}
Newton’s Method, Convergence

Convergence
If F is C^2 and x^{(0)} is "close enough" to x, the iterations converge quadratically:

‖x^{(k+1)} − x‖ = O(‖x − x^{(k)}‖^2)

Termination Criteria
Nothing is perfect, but two commonly used criteria for terminating the iterations and accepting the last iterate are:

‖Δx^{(k)}‖ ≤ tol
‖F(x^{(k)})‖ ≤ tol
Newton’s Method for Backward Euler

Backward Euler: u_{n+1} = u_n + h f(t_{n+1}, u_{n+1}). The nonlinear system of equations becomes:

F(u_{n+1}) = u_{n+1} − u_n − h f(t_{n+1}, u_{n+1})

with derivative

DF(u_{n+1}) = I − h Df(t_{n+1}, u_{n+1})

The iterations are:

(I − h Df(t_{n+1}, u_{n+1}^{(k)})) Δu_{n+1}^{(k)} = −(u_{n+1}^{(k)} − u_n − h f(t_{n+1}, u_{n+1}^{(k)}))
u_{n+1}^{(k+1)} = u_{n+1}^{(k)} + Δu_{n+1}^{(k)}

Good initial guess: u_{n+1}^{(0)} = u_n
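A sketch of these iterations for a single backward Euler step; f(t, u) and its Jacobian df(t, u) are assumed to be given as function handles:

function u1 = beuler_step(f, df, t1, u0, h, tol)
u1 = u0;                                    % initial guess u1 = u0
for k = 1:20
    F  = u1 - u0 - h*f(t1, u1);             % F(u1) as above
    dF = eye(length(u0)) - h*df(t1, u1);    % DF(u1) = I - h Df
    du = -(dF \ F);
    u1 = u1 + du;
    if norm(du) <= tol, break, end          % termination criterion
end
end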
General Implicit Runge-Kutta Methods (IRKs)

All the stage derivatives k_i are coupled:

c_1 | a_11  ···  a_1s
 :  |  :          :
c_s | a_s1  ···  a_ss
    | b_1   ···  b_s

k_i = f(t_n + c_i h, u_n + h Σ_{j=1}^{s} a_{ij} k_j)

Solve simultaneously for k = (k_1, ..., k_s)^T in the equations F(k) = (F_1(k), ..., F_s(k))^T = 0, with

F_i(k) = k_i − f(t_n + c_i h, u_n + h Σ_{j=1}^{s} a_{ij} k_j)

This is a system of sN equations (where N is the number of components in the ODE). Solve using Newton iterations:

k^{(j+1)} = k^{(j)} − (∂F/∂k)^{−1} F(k^{(j)})

where ∂F/∂k is a block matrix with block-entries ∂F_i/∂k_j.


Diagonal IRK (DIRK)

Lower triangular RK tableau:

c_1 | a_11
 :  |  :    ···
c_s | a_s1  ···  a_ss
    | b_1   ···  b_s

k_i = f(t_n + c_i h, u_n + h Σ_{j=1}^{s} a_{ij} k_j)

Solve

F_1(k_1) = k_1 − f(t_n + c_1 h, u_n + h a_11 k_1)
F_2(k_2) = k_2 − f(t_n + c_2 h, u_n + h a_21 k_1 + h a_22 k_2)
...

for s systems of N equations.


Order Conditions for IRKs

Example: Implicit Trapezoidal rule

k_1 = f(u_n)
k_2 = f(u_n + h(k_1 + k_2)/2)
u_{n+1} = u_n + h(k_1 + k_2)/2

0 |  0    0
1 | 1/2  1/2
  | 1/2  1/2

Do a Taylor expansion of k_1, k_2, and approximate to first order:

k_1 = f
k_2 = f + f_y h(k_1 + k_2)/2 + O(h^2) = f + O(h)

Insert k_1 and the approximation of k_2 into the expression for k_2:

k_2 = f + f_y h(f + f + O(h))/2 + O(h^2) = f + h f_y f + O(h^2)

The local truncation error is then

τ_n = y_{n+1} − y_n − h(k_1 + k_2)/2
    = h f + (h^2/2) f_y f + O(h^3) − (h/2)(f + f + h f_y f + O(h^2)) = O(h^3)
2 2
Runge-Kutta Order Conditions
Runge-Kutta Order Conditions

It is clear that any Runge-Kutta method can be analyzed using Taylor series expansions, which will produce terms of the form

y' = f,   y'' = f_y f
y^{(3)} = f_{yy} f^2 + f_y^2 f,   y^{(4)} = f_{yyy} f^3 + 4 f_{yy} f_y f^2 + f_y^3 f

A convenient way to manage the resulting order conditions is by a graph analogy. Each term is represented by a rooted tree, with the number of f-derivatives equal to the number of children of a vertex. For example, f_{yy} f_y f^2 corresponds to a tree with root f_{yy} (two children): one child is a leaf f, and the other is a vertex f_y whose single child is a leaf f.
The γ Function

For each tree t̂, we define the function γ(t̂) as the product of the order of t̂ and the orders of all possible trees after successively removing roots. For example, for a tree of order 4 whose root removal leaves subtrees of orders 2 and 1 (and removing the root of the order-2 subtree leaves a tree of order 1), we get γ(t̂) = 4 · 2 · 1 · 1 = 8.
Order Conditions

The procedure for finding the conditions that make a Runge-Kutta method accurate of order at least p is as follows: Consider all trees t̂ of order ≤ p, and for each tree:

Find γ(t̂)
Assign indices to the tree vertices (e.g., root `, its children i and j, and a grandchild k)
The order condition is

Σ_{`,i,j,k=1}^{s} b_` a_{`,i} a_{`,j} a_{i,k} = 1/γ(t̂)

that is, b_` multiplied by one a_{i,j} for each edge between two vertices i, j
Example: 2-stage Explicit Runge-Kutta

Graph / γ / Order Condition:

Single vertex `;   γ = 1:
Σ_{`=1}^{2} b_` = b_1 + b_2 = 1

Vertex ` with one child i;   γ = 2:
Σ_{`,i=1}^{2} b_` a_{`,i} = b_2 a_21 = 1/2

Vertex ` with two children i, j;   γ = 3:
Σ_{`,i,j=1}^{2} b_` a_{`,i} a_{`,j} = b_2 a_21^2 = 1/3

Chain ` → i → j;   γ = 3·2 = 6:
Σ_{`,i,j=1}^{2} b_` a_{`,i} a_{i,j} = 0 = 1/6   — impossible to satisfy
Gaussian Quadrature and Collocation Methods
Gaussian Quadrature

Basic idea: Calculate both nodes x_1, ..., x_n and coefficients w_1, ..., w_n such that

∫_a^b f(x) dx ≈ Σ_{i=1}^{n} w_i f(x_i)

Since there are 2n parameters, we might expect a degree of precision of 2n − 1.

Example: n = 2 gives the rule

∫_{−1}^{1} f(x) dx ≈ f(−√3/3) + f(√3/3)

with degree of precision 3


Legendre Polynomials

The Legendre polynomials P_n(x) have the properties

1. For each n, P_n(x) is a polynomial of degree n
2. ∫_{−1}^{1} P(x) P_n(x) dx = 0 when P(x) is a polynomial of degree less than n
3. Different normalizations are possible; below we use P_n(1) = 1

The roots of P_n(x) are distinct, lie in the interval (−1, 1), and are symmetric with respect to the origin.
Examples:

P_0(x) = 1,   P_1(x) = x,   P_2(x) = (3x^2 − 1)/2,   P_3(x) = (5x^3 − 3x)/2
Gaussian Quadrature

Theorem
Suppose x_1, ..., x_n are the roots of P_n(x) and

w_i = ∫_{−1}^{1} Π_{j≠i} (x − x_j)/(x_i − x_j) dx

If P(x) is any polynomial of degree less than 2n, then

∫_{−1}^{1} P(x) dx = Σ_{i=1}^{n} w_i P(x_i)
Computing Gaussian Quadrature Coefficients

MATLAB Implementation
function [x,c]=gaussquad(n)
%GAUSSQUAD Gaussian quadrature

% Legendre coefficients from the recurrence (k+1)P_{k+1}=(2k+1)xP_k-kP_{k-1}
P=zeros(n+1,n+1);
P([1,2],1)=1;
for k=1:n-1
  P(k+2,1:k+2)=((2*k+1)*[P(k+1,1:k+1) 0]- ...
                k*[0 0 P(k,1:k)])/(k+1);
end
x=sort(roots(P(n+1,1:n+1)));   % nodes = roots of P_n

% Weights from exactness: sum_i c_i P_k(x_i) = 2 for k=0, 0 for k=1..n-1
A=zeros(n,n);
for i=1:n
  A(i,:)=polyval(P(i,1:i),x)';
end
c=A\[2;zeros(n-1,1)];
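Example use (a sketch): a 5-point rule integrates e^x on [−1, 1] essentially to machine precision:

[x,c] = gaussquad(5);
approx = c'*exp(x);                % sum of c_i * f(x_i)
abs(approx - (exp(1) - exp(-1)))   % compare with the exact integral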
The Lagrange Polynomial
Theorem
If x_1, ..., x_n are distinct and f is given at these numbers, a unique polynomial P(x) of degree < n exists with

f(x_k) = P(x_k),   for each k = 1, 2, ..., n

The polynomial is

P(x) = f(x_1) L_{n,1}(x) + ··· + f(x_n) L_{n,n}(x) = Σ_{k=1}^{n} f(x_k) L_{n,k}(x)

where

L_{n,k}(x) = [(x − x_1)(x − x_2) ··· (x − x_{k−1})(x − x_{k+1}) ··· (x − x_n)] /
             [(x_k − x_1)(x_k − x_2) ··· (x_k − x_{k−1})(x_k − x_{k+1}) ··· (x_k − x_n)]
           = Π_{i≠k} (x − x_i)/(x_k − x_i)
Collocation

Integrate the IVP y' = f(t, y) from t_n to t and apply the fundamental theorem of calculus to obtain

y(t) = y_n + ∫_{t_n}^{t} f(τ, y(τ)) dτ

Introduce s collocation points t^{(i)} = t_n + c_i h with c_i ∈ [0, 1].
Approximate f(t, y(t)) ≈ f̃(t), where f̃(t) is the unique polynomial of degree s − 1 interpolating the s points (t^{(i)}, f(t^{(i)}, y(t^{(i)}))):

f̃(t) = Σ_{j=1}^{s} L_{s,j}(t) f(t^{(j)}, y(t^{(j)}))
Collocation

Use f̃ to define an approximate solution u(t):

u(t) = y_n + ∫_{t_n}^{t} Σ_{j=1}^{s} L_{s,j}(τ) f(t^{(j)}, u(t^{(j)})) dτ
     = y_n + Σ_{j=1}^{s} f(t^{(j)}, u(t^{(j)})) ∫_{t_n}^{t} L_{s,j}(τ) dτ

Introduce the stages k_i, i = 1, ..., s, and insert u(t):

k_i = f(t^{(i)}, u(t^{(i)})) = f(t^{(i)}, y_n + Σ_{j=1}^{s} k_j ∫_{t_n}^{t^{(i)}} L_{s,j}(τ) dτ)
    = f(t^{(i)}, y_n + h Σ_{j=1}^{s} a_{i,j} k_j)
Collocation

Similarly, the approximate solution at time t_{n+1} is

u_{n+1} = y_n + Σ_{j=1}^{s} k_j ∫_{t_n}^{t_{n+1}} L_{s,j}(τ) dτ = y_n + h Σ_{j=1}^{s} b_j k_j

This is an implicit Runge-Kutta scheme, with coefficients

a_{i,j} = (1/h) ∫_{t_n}^{t^{(i)}} L_{s,j}(τ) dτ
b_j = (1/h) ∫_{t_n}^{t_{n+1}} L_{s,j}(τ) dτ

Choose the c_i as Gaussian quadrature nodes (on the interval [0, 1]) to obtain order 2s
Example: The Implicit Midpoint Rule

s = 1, Gaussian quadrature point c_1 = 1/2, Lagrange polynomial L_{1,1}(t) = 1
Coefficients:

a_{1,1} = (1/h) ∫_{t_n}^{t_n + h/2} L_{1,1}(τ) dτ = 1/2
b_1 = (1/h) ∫_{t_n}^{t_n + h} L_{1,1}(τ) dτ = 1

Scheme:

1/2 | 1/2
    |  1
Example: Hammer-Hollingsworth

s = 2, Gaussian quadrature points c_1 = 1/2 − √3/6, c_2 = 1/2 + √3/6, Lagrange polynomials

L_{2,1}(t) = (t − t^{(2)})/(t^{(1)} − t^{(2)}),   L_{2,2}(t) = (t − t^{(1)})/(t^{(2)} − t^{(1)})

Coefficients:

a_{1,1} = (1/h) ∫_{t_n}^{t_n + c_1 h} L_{2,1}(τ) dτ = ··· = 1/4
a_{1,2} = (1/h) ∫_{t_n}^{t_n + c_1 h} L_{2,2}(τ) dτ = ··· = 1/4 − √3/6
a_{2,1} = (1/h) ∫_{t_n}^{t_n + c_2 h} L_{2,1}(τ) dτ = ··· = 1/4 + √3/6
a_{2,2} = (1/h) ∫_{t_n}^{t_n + c_2 h} L_{2,2}(τ) dτ = ··· = 1/4
Example: Hammer-Hollingsworth

Coefficients (cont'd):

b_1 = (1/h) ∫_{t_n}^{t_n + h} L_{2,1}(τ) dτ = 1/2
b_2 = (1/h) ∫_{t_n}^{t_n + h} L_{2,2}(τ) dτ = 1/2

Scheme:

1/2 − √3/6 |    1/4       1/4 − √3/6
1/2 + √3/6 | 1/4 + √3/6      1/4
           |    1/2          1/2
Example: Three-stage Gauss Legendre
s = 3, Gaussian quadrature points c_1 = (1 − √(3/5))/2, c_2 = 1/2, c_3 = (1 + √(3/5))/2, Lagrange polynomials

L_{3,1}(t) = (t − t^{(2)})(t − t^{(3)}) / [(t^{(1)} − t^{(2)})(t^{(1)} − t^{(3)})]
L_{3,2}(t) = (t − t^{(1)})(t − t^{(3)}) / [(t^{(2)} − t^{(1)})(t^{(2)} − t^{(3)})]
L_{3,3}(t) = (t − t^{(1)})(t − t^{(2)}) / [(t^{(3)} − t^{(1)})(t^{(3)} − t^{(2)})]

Scheme:

1/2 − √15/10 |      5/36        2/9 − √15/15    5/36 − √15/30
    1/2      | 5/36 + √15/24        2/9         5/36 − √15/24
1/2 + √15/10 | 5/36 + √15/30    2/9 + √15/15        5/36
             |      5/18            4/9              5/18
A,L,B-stability of Runge-Kutta Methods
Linear Stability Analysis of RK Methods

General Runge-Kutta method:

k_i = f(t_n + c_i h, u_n + h Σ_{j=1}^{s} a_{ij} k_j),   u_{n+1} = u_n + h Σ_{j=1}^{s} b_j k_j

Apply to the scalar linear test problem y' = λy and solve for k = [k_1, k_2, ..., k_s]^T (with e = [1, 1, ..., 1]^T):

k_i = λ(u_n + h Σ_{j=1}^{s} a_{ij} k_j)

k = λ u_n e + hλ A k   =⇒   k = (I − hλA)^{−1} λ u_n e

u_{n+1} = u_n + h b^T k = [1 + hλ b^T (I − hλA)^{−1} e] u_n

R(z) = 1 + z b^T (I − zA)^{−1} e


Linear Stability Analysis of RK Methods

By Cramer's rule,

(I − zA)^{−1} = adj(I − zA) / det(I − zA)

with

adj(I − zA) ∈ P_{s−1},   det(I − zA) ∈ P_s,

where P_n is the space of polynomials of degree n.

Therefore, in general R(z) is a rational function

R ∈ P_{s/s}   (numerator and denominator of degree at most s)

For an explicit RK method, det(I − zA) = 1 and

R ∈ P_s
A-stability of RK Methods

No explicit RK method can be A-stable or A(α)-stable, since R ∈ P_s and R(0) = 1, and no polynomial is bounded by 1 in C^− (except a constant)
It can be shown that all Gauss-Legendre implicit RK methods are A-stable

Lemma
Let R be an arbitrary rational function that is not a constant. Then |R(z)| < 1 for all z ∈ C^− if and only if all the poles of R have positive real parts and |R(it)| ≤ 1 for all real t.
L-Stability

Definition
A one-step method is L-stable if it is A-stable and lim_{z→∞} |R(z)| = 0

For R(z) = P(z)/Q(z), L-stability requires deg(P) < deg(Q).

Examples
Backward Euler: R(z) = 1/(1 − z) → 0 as z → ∞   =⇒ L-stable
Implicit midpoint: R(z) = (1 + z/2)/(1 − z/2) → −1 as z → ∞   =⇒ not L-stable
Hammer-Hollingsworth: R(z) = (z^2 + 6z + 12)/(z^2 − 6z + 12) → 1 as z → ∞   =⇒ not L-stable
TR-BDF2: R(z) = (1 + (5/12)z)/(1 − (7/12)z + (1/12)z^2) → 0 as z → ∞   =⇒ L-stable
Nonlinear Stability
Definition
A function f is dissipative if (f (t, y) − f (t, z))T (y − z) ≤ 0 for all
y and z.

Definition
An ODE is contractive if ‖y(t) − z(t)‖ ≤ ‖y(s) − z(s)‖ for every pair of solutions y and z when t ≥ s.

Every ODE with a dissipative right-hand side f is contractive

Definition
A numerical method is B-stable (or contractive) if every pair of numerical solutions u and v satisfies ‖u_{n+1} − v_{n+1}‖ ≤ ‖u_n − v_n‖ for all n, when solving an IVP with a dissipative f.

A B-stable method is also A-stable


B-Stability of RK

Definition
A RK method is algebraically stable if the matrices

B = diag(b_1, ..., b_s),   M = BA + A^T B^T − b b^T

are nonnegative semidefinite (x^T M x ≥ 0 for any x).

An algebraically stable RK method is B-stable.

Examples
Implicit midpoint method: A = 1/2, b = 1. B = 1 is nonnegative semidefinite, and M = 1 · 1/2 + 1/2 · 1 − 1 · 1 = 1 − 1 = 0 is nonnegative semidefinite   =⇒ algebraically stable   =⇒ B-stable.
Linear Multistep Methods, Adams Methods
Linear Multistep Methods

An r-step linear multistep method has the form

Σ_{j=0}^{r} α_j u_{n+j} = h Σ_{j=0}^{r} β_j f(t_{n+j}, u_{n+j})

or, with a shift by r − 1,

α_r u_{n+1} + α_{r−1} u_n + ··· + α_0 u_{n−r+1} = h(β_r f_{n+1} + β_{r−1} f_n + ··· + β_0 f_{n−r+1})

Normalize by setting α_r = 1
Explicit if β_r = 0
Adams Methods

Integrate y' = f(t, y(t)) and use the fundamental theorem of calculus:

y(t_{n+1}) = y(t_n) + ∫_{t_n}^{t_{n+1}} f(t, y(t)) dt

Approximate f(t, y(t)) by a polynomial p(t) and integrate.

Adams-Bashforth Methods
Let p(t) interpolate f_n, f_{n−1}, ..., f_{n−r+1}
r = 1: p(t) = f_n   =⇒   u_{n+1} = u_n + h f_n
r = 2: p(t) = f_n (t − t_{n−1})/(t_n − t_{n−1}) + f_{n−1} (t − t_n)/(t_{n−1} − t_n)

u_{n+1} = u_n + [f_n (t − t_{n−1})^2/(2h) − f_{n−1} (t − t_n)^2/(2h)] evaluated between t_n and t_{n+1}
        = u_n + (3/2) h f_n − (1/2) h f_{n−1}
Adams-Bashforth Methods

1-step: u_{n+1} = u_n + h f(t_n, u_n)
2-step: u_{n+1} = u_n + (h/2)[3 f(t_n, u_n) − f(t_{n−1}, u_{n−1})]
3-step: u_{n+1} = u_n + (h/12)[23 f(t_n, u_n) − 16 f(t_{n−1}, u_{n−1}) + 5 f(t_{n−2}, u_{n−2})]
4-step: u_{n+1} = u_n + (h/24)[55 f(t_n, u_n) − 59 f(t_{n−1}, u_{n−1}) + 37 f(t_{n−2}, u_{n−2}) − 9 f(t_{n−3}, u_{n−3})]

Local truncation error τ_n(h) = O(h^{r+1})
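A sketch of the 2-step method as a driver; the extra starting value u_1 is bootstrapped here with one forward Euler step (an assumed choice that preserves the overall order):

function u = ab2(f, u0, T, N)
h = T/N;
u = zeros(length(u0), N+1);
u(:,1) = u0;
u(:,2) = u0 + h*f(0, u0);          % one Euler startup step for u_1
for n = 2:N
    fn  = f((n-1)*h, u(:,n));
    fm1 = f((n-2)*h, u(:,n-1));
    u(:,n+1) = u(:,n) + h/2*(3*fn - fm1);
end
end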


Linear Multistep Methods

Adams-Moulton Methods
Let p(t) interpolate f_{n+1}, f_n, f_{n−1}, ..., f_{n−r+1}
r = 1: p(t) = f_{n+1} (t − t_n)/(t_{n+1} − t_n) + f_n (t − t_{n+1})/(t_n − t_{n+1})

u_{n+1} = u_n + [f_{n+1} (t − t_n)^2/(2h) − f_n (t − t_{n+1})^2/(2h)] evaluated between t_n and t_{n+1}
        = u_n + (h/2) f_{n+1} + (h/2) f_n   (Trapezoidal)

r = 2: p(t) = f_{n+1} (t − t_n)(t − t_{n−1}) / [(t_{n+1} − t_n)(t_{n+1} − t_{n−1})]
            + f_n (t − t_{n+1})(t − t_{n−1}) / [(t_n − t_{n+1})(t_n − t_{n−1})]
            + f_{n−1} (t − t_{n+1})(t − t_n) / [(t_{n−1} − t_{n+1})(t_{n−1} − t_n)]

u_{n+1} = u_n + ∫_{t_n}^{t_{n+1}} p(t) dt = (Maple) = u_n + (5/12) h f_{n+1} + (2/3) h f_n − (1/12) h f_{n−1}
Adams-Moulton Methods

1-step: u_{n+1} = u_n + (h/2)[f(t_{n+1}, u_{n+1}) + f(t_n, u_n)]
2-step: u_{n+1} = u_n + (h/12)[5 f(t_{n+1}, u_{n+1}) + 8 f(t_n, u_n) − f(t_{n−1}, u_{n−1})]
3-step: u_{n+1} = u_n + (h/24)[9 f(t_{n+1}, u_{n+1}) + 19 f(t_n, u_n) − 5 f(t_{n−1}, u_{n−1}) + f(t_{n−2}, u_{n−2})]
4-step: u_{n+1} = u_n + (h/720)[251 f(t_{n+1}, u_{n+1}) + 646 f(t_n, u_n) − 264 f(t_{n−1}, u_{n−1}) + 106 f(t_{n−2}, u_{n−2}) − 19 f(t_{n−3}, u_{n−3})]

Local truncation error τ_n(h) = O(h^{r+2})


Backward Differentiation Formulae Methods
BDF Methods

Let p(t) interpolate u_{n+1}, u_n, u_{n−1}, ..., u_{n−r+1}
Impose p'(t_{n+1}) = f(t_{n+1}, u_{n+1})

BDF1 (Backward Euler):

p(t) = u_{n+1} (t − t_n)/(t_{n+1} − t_n) + u_n (t − t_{n+1})/(t_n − t_{n+1})
p'(t) = u_{n+1}/h − u_n/h
p'(t_{n+1}) = f(t_{n+1}, u_{n+1})   =⇒   u_{n+1} = u_n + h f(t_{n+1}, u_{n+1})
BDF2

p(t) = u_{n+1} (t − t_n)(t − t_{n−1}) / [(t_{n+1} − t_n)(t_{n+1} − t_{n−1})]
     + u_n (t − t_{n+1})(t − t_{n−1}) / [(t_n − t_{n+1})(t_n − t_{n−1})]
     + u_{n−1} (t − t_{n+1})(t − t_n) / [(t_{n−1} − t_{n+1})(t_{n−1} − t_n)]

p'(t) = u_{n+1} (1/(2h^2))(t − t_n + t − t_{n−1})
      − u_n (1/h^2)(t − t_{n+1} + t − t_{n−1})
      + u_{n−1} (1/(2h^2))(t − t_{n+1} + t − t_n)

p'(t_{n+1}) = (3/(2h)) u_{n+1} − (2/h) u_n + (1/(2h)) u_{n−1} = f(t_{n+1}, u_{n+1})   =⇒
u_{n+1} = (4/3) u_n − (1/3) u_{n−1} + (2/3) h f(t_{n+1}, u_{n+1})
Backward Differentiation Formula (BDF) Methods

1-step: un+1 =un + hf (tn+1 , un+1 )


2-step: 3un+1 =4un − un−1 + 2hf (tn+1 , un+1 )
3-step: 11un+1 =18un − 9un−1 + 2un−2 + 6hf (tn+1 , un+1 )
4-step: 25un+1 =48un − 36un−1 + 16un−2 − 3un−3
+ 12hf (tn+1 , un+1 )
5-step: 137un+1 =300un − 300un−1 + 200un−2 − 75un−3
+ 12un−4 + 60hf (tn+1 , un+1 )
6-step: 147un+1 =360un − 450un−1 + 400un−2 − 225un−3
+ 72un−4 − 10un−5 + 60hf (tn+1 , un+1 )
BDF Local Truncation Errors

By design or by Taylor expansions,

τ_n(h) = Σ_{j=0}^{r} α_j y_{n+j} − h Σ_{j=0}^{r} β_j f(t_{n+j}, y_{n+j}) = O(h^{r+1})
Order and Convergence of Multistep Methods
Zero-stability of LMMs

Consider linear multistep methods of the form (α_r = 1):

Σ_{j=0}^{r} α_j u_{n+j} = h Σ_{j=0}^{r} β_j f(t_{n+j}, u_{n+j})

Example:

u_{n+2} − 3u_{n+1} + 2u_n = −h f(t_n, u_n)

Use Taylor expansions to obtain the local truncation error:

τ = y_{n+2} − 3y_{n+1} + 2y_n + h f_n
  = y_n + 2h y_n' + (4h^2/2) y_n'' + O(h^3) − 3(y_n + h y_n' + (h^2/2) y_n'' + O(h^3)) + 2y_n + h y_n'
  = O(h^2)   =⇒ consistent
Unstable Schemes

The scheme below is consistent

u_{n+2} − 3u_{n+1} + 2u_n = −h f(t_n, u_n)

However, consider the simple test problem y' = 0, y(0) = 0


Suppose the two initial values are u0 = 0, u1 = h, which
converge to the true solution y(t) = 0 as h → 0
The solution grows exponentially in the number of steps n:

u0 = 0
u1 = h
u2 = 3h
u3 = 7h
..
.
u_n = (2^n − 1)h
Linear Difference Equations

Consider the homogeneous problem Σ_{j=0}^{r} α_j u_{n+j} = 0
Suppose u_n = ζ^n and plug into the scheme:

Σ_{j=0}^{r} α_j ζ^{n+j} = 0   =⇒   Σ_{j=0}^{r} α_j ζ^j = 0

Thus, ζ must be a root of the polynomial

ρ(ζ) = Σ_{j=0}^{r} α_j ζ^j = α_r (ζ − ζ_1)(ζ − ζ_2) ··· (ζ − ζ_r)

Any linear combination is also a solution, so if all ζ_i are distinct:

u_n = c_1 ζ_1^n + c_2 ζ_2^n + ··· + c_r ζ_r^n

for some constants c_i


Matching the Initial Conditions

The coefficients c_i can be determined from the initial conditions u_0, u_1, ..., u_{r−1}:

c_1 + c_2 + ··· + c_r = u_0
c_1 ζ_1 + c_2 ζ_2 + ··· + c_r ζ_r = u_1
...
c_1 ζ_1^{r−1} + c_2 ζ_2^{r−1} + ··· + c_r ζ_r^{r−1} = u_{r−1}
Example

For

u_{n+2} − 3u_{n+1} + 2u_n = −h f(t_n, u_n)

the polynomial is

ρ(ζ) = ζ^2 − 3ζ + 2 = (ζ − 1)(ζ − 2)   =⇒   ζ_1 = 1, ζ_2 = 2   =⇒   u_n = c_1 + c_2 2^n

Match the initial conditions u_0 = 0, u_1 = h:

c_1 + c_2 = 0,   c_1 + 2c_2 = h   =⇒   c_2 = h, c_1 = −h   =⇒   u_n = h(2^n − 1)

It is clear that if any |ζ_i| > 1, there is in general no convergence
Repeated Roots

If a root is repeated, for example ζ_1 = ζ_2 = ζ_3, the solution is

u_n = c_1 ζ_1^n + c_2 n ζ_1^n + c_3 n^2 ζ_1^n + c_4 ζ_4^n + ··· + c_r ζ_r^n

Example: u_{n+2} − 2u_{n+1} + u_n = 0

ρ(ζ) = ζ^2 − 2ζ + 1 = (ζ − 1)^2   =⇒   ζ_1 = ζ_2 = 1   =⇒   u_n = c_1 + c_2 n = u_0 + (u_1 − u_0) n

No convergence, even though |ζ_1| = 1 satisfies the bound, since the root is repeated

Example: u_{n+3} − 2u_{n+2} + (5/4) u_{n+1} − (1/4) u_n = 0

ρ(ζ) = ζ^3 − 2ζ^2 + (5/4)ζ − 1/4 = (ζ − 1)(ζ − 0.5)^2   =⇒   u_n = c_1 + c_2 0.5^n + c_3 n 0.5^n → c_1 as n → ∞

Convergence, since |ζ_2| = |ζ_3| = 0.5 < 1 for the repeated root
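These root conditions are easy to check numerically; a sketch using roots():

abs(roots([1 -3 2]))        % u_{n+2}-3u_{n+1}+2u_n: root 2 > 1, not zero-stable
abs(roots([1 -2 1]))        % (zeta-1)^2: repeated root with |zeta|=1, not zero-stable
abs(roots([1 -2 5/4 -1/4])) % roots 1, 0.5, 0.5: zero-stable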


Zero-Stability

Definition
An r-step Linear Multistep Method (LMM) is zero-stable if the
roots of the polynomial ρ(ζ) satisfy:

1) |ζj | ≤ 1
2) |ζj | < 1 if ζj is repeated

Theorem (Dahlquist Equivalence Theorem)
For LMMs applied to the IVP y'(t) = f(t, y(t)),

consistency + zero-stability ⇐⇒ convergence


Zero-Stability

Example: The one-step method

u_{n+1} = u_n + h β_1 f_{n+1} + h β_0 f_n

has ρ(ζ) = ζ − 1   =⇒   ζ_1 = 1   =⇒   zero-stable   =⇒   convergent if consistent

The Adams methods have the form

u_{n+r} = u_{n+r−1} + h Σ_{j=0}^{r} β_j f_{n+j}

and

ρ(ζ) = ζ^r − ζ^{r−1} = (ζ − 1) ζ^{r−1}   =⇒   ζ_1 = 1,   ζ_2 = ζ_3 = ··· = 0

=⇒ convergent if consistent
Zero-Stability

Example: The BDF2 method u_{n+2} − (4/3) u_{n+1} + (1/3) u_n = (2/3) h f_{n+2}

ρ(ζ) = ζ^2 − (4/3)ζ + 1/3 = (ζ − 1)(ζ − 1/3)   =⇒   zero-stable   =⇒   convergent if consistent

It can be shown that the BDF methods are zero-stable when r ≤ 6

Theorem (First Dahlquist Barrier)
A zero-stable linear r-step multistep method cannot attain an order of convergence greater than r + 1 if r is odd, or greater than r + 2 if r is even. If the method is also explicit, then it cannot attain an order greater than r.
Stability Regions for LMMs

Plug the scalar test equation y' = λy into the LMM:

Σ_{j=0}^{r} α_j u_{n+j} = h Σ_{j=0}^{r} β_j f_{n+j} = hλ Σ_{j=0}^{r} β_j u_{n+j}

or, with z = hλ,

Σ_{j=0}^{r} (α_j − z β_j) u_{n+j} = 0

Difference equation with characteristic polynomial

π(ζ; z) = ρ(ζ) − z σ(ζ)

where ρ(ζ) = Σ_{j=0}^{r} α_j ζ^j and σ(ζ) = Σ_{j=0}^{r} β_j ζ^j
Stability Regions for LMMs

Requiring that a solution is bounded for any set of initial


values u0 , u1 , . . . , ur−1 gives the definition of absolute
stability:

Definition
The region of absolute stability for an LMM is the set of points z
in the complex plane for which the roots ζj of the polynomial
π(ζ; z) = ρ(ζ) − zσ(ζ) satisfy

1) |ζj | ≤ 1
2) |ζj | < 1 if ζj is repeated

Theorem (Second Dahlquist Barrier)
There are no explicit A-stable linear multistep methods. The implicit ones have order of convergence at most 2.
Stepsize Control and Embedded RK Methods
Error Estimation

Consider two multistep schemes of order p:

Σ_{j=0}^{s} a_j u_{n+j} = h Σ_{j=0}^{s} b_j f(t_{n+j}, u_{n+j})
Σ_{j=q}^{s} ã_j w_{n+j} = h Σ_{j=q}^{s} b̃_j f(t_{n+j}, w_{n+j})

where q ≤ s − 1.
The local truncation errors are

τ = y_{n+s} − u_{n+s} = c h^{p+1} y^{(p+1)}(t_{n+s}) + O(h^{p+2})
τ̃ = y_{n+s} − w_{n+s} = c̃ h^{p+1} y^{(p+1)}(t_{n+s}) + O(h^{p+2})
Error Estimation

Neglect the O(h^{p+2}) terms and subtract:

w_{n+s} − u_{n+s} ≈ (c − c̃) h^{p+1} y^{(p+1)}(t_{n+s})

This gives

h^{p+1} y^{(p+1)}(t_{n+s}) ≈ (w_{n+s} − u_{n+s})/(c − c̃)

Plug into the local truncation error to get the Milne device:

τ ≈ [c/(c − c̃)](w_{n+s} − u_{n+s})
Example: TR-AB2

Use AB2 to monitor the error in the trapezoidal rule:

TR:  u_{n+1} = u_n + (h/2)(f_{n+1} + f_n)
AB2: w_{n+1} = w_n + (h/2)(3 f_n − f_{n−1})

Find the local truncation errors:

τ = y + h y' + (h^2/2) y'' + (h^3/6) y''' − y − (h/2)[(y' + h y'' + (h^2/2) y''') + y'] + O(h^4)
  = −(1/12) h^3 y''' + O(h^4)

τ̃ = y + h y' + (h^2/2) y'' + (h^3/6) y''' − y − (3h/2) y' + (h/2)(y' − h y'' + (h^2/2) y''') + O(h^4)
  = (5/12) h^3 y''' + O(h^4)

With c = −1/12 and c̃ = 5/12, the error estimate becomes

τ ≈ [c/(c − c̃)](w_{n+1} − u_{n+1}) = (1/6)(w_{n+1} − u_{n+1})
Example: AM3-AB4

Use Adams-Bashforth 4 to monitor the error in Adams-Moulton 3:

AM3: τ = −(19/720) h^5 y^{(5)} + O(h^6)
AB4: τ̃ = (251/720) h^5 y^{(5)} + O(h^6)

τ ≈ [c/(c − c̃)](w_{n+1} − u_{n+1}) = (19/270)(w_{n+1} − u_{n+1})
Stepsize Control Strategy

Control the error per unit step, τ ≤ δh, for a given tolerance δ

At step n, estimate τ_n
If τ_n > δh:
  Reject the step
  Set h ← h/2
If τ_n ≤ δh:
  Accept the step
  If τ_n ≤ δh/10:
    Set h ← 2h
For step doubling, pick every second old data point
For step halving, interpolate unknown data
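A sketch of this control loop; estimate_lte is a hypothetical helper returning the estimated LTE and the trial solution for the current step:

while t < T
    [tau, utrial] = estimate_lte(f, u, t, h);  % hypothetical error estimator
    if tau > delta*h
        h = h/2;                 % reject the step and halve h
    else
        u = utrial;  t = t + h;  % accept the step
        if tau <= delta*h/10
            h = 2*h;             % error well below tolerance: double h
        end
    end
end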
Embedded Runge-Kutta Methods

Use two different Runge-Kutta methods, of order p and p + 1:

u_{n+1} = y_{n+1} + ℓ h^{p+1} + O(h^{p+2})
û_{n+1} = y_{n+1} + O(h^{p+2})

Neglect higher-order terms and subtract to get

ℓ h^{p+1} ≈ u_{n+1} − û_{n+1}

The error estimate is then simply

τ ≈ u_{n+1} − û_{n+1}
Embedded Runge-Kutta Methods

To minimize the cost of the error estimator, find two different methods with the same RK matrix A:

c | A
  | b^T
  | b̂^T

The stages k_i are then equal for the two methods, and the only additional cost for the error estimate is an extra linear combination. The error estimate is

E_n = h Σ_{j=1}^{s} (b_j − b̂_j) k_j
Example: Euler-TR ERK

Same stages:

k_1 = f(u_n)
k_2 = f(u_n + h k_1)

Solutions:

u_{n+1} = u_n + h k_1
û_{n+1} = u_n + h k_1/2 + h k_2/2

Error estimate:

E_{n+1} = u_{n+1} − û_{n+1} = (h/2)(k_1 − k_2) = C_1 h^2 + O(h^3)

Extended Butcher array:

0 | 0    0
1 | 1    0
  | 1    0
  | 1/2  1/2
Example: 3-stage explicit RK and 2-stage Midpoint

Extended Butcher array:

 0  |  0    0    0
1/2 | 1/2   0    0
 1  | −1    2    0
    | 1/6  2/3  1/6
    |  0    1    0

Error estimate:

E_n = (h/6) k_1 − (h/3) k_2 + (h/6) k_3 = O(h^3)
Finite Difference Approximations
Finite Difference Approximations

D_+ u(x̄) = [u(x̄ + h) − u(x̄)]/h = u'(x̄) + (h/2) u''(x̄) + O(h^2)
D_− u(x̄) = [u(x̄) − u(x̄ − h)]/h = u'(x̄) − (h/2) u''(x̄) + O(h^2)
D_0 u(x̄) = [u(x̄ + h) − u(x̄ − h)]/(2h) = u'(x̄) + (h^2/6) u'''(x̄) + O(h^4)
D^2 u(x̄) = [u(x̄ − h) − 2u(x̄) + u(x̄ + h)]/h^2 = u''(x̄) + (h^2/12) u''''(x̄) + O(h^4)
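A sketch observing these accuracy orders on the assumed example u = sin x at x̄ = 1:

u = @sin;  xbar = 1;
for h = [0.1 0.05 0.025]
    Dp = (u(xbar+h) - u(xbar))/h;                  % forward difference
    D0 = (u(xbar+h) - u(xbar-h))/(2*h);            % centered difference
    D2 = (u(xbar-h) - 2*u(xbar) + u(xbar+h))/h^2;  % second difference
    fprintf('h=%6.3f  D+:%9.2e  D0:%9.2e  D2:%9.2e\n', h, ...
        abs(Dp - cos(xbar)), abs(D0 - cos(xbar)), abs(D2 + sin(xbar)));
end
% D+ errors are O(h); D0 and D2 errors are O(h^2)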
Method of Undetermined Coefficients

Find an approximation to u^{(k)}(x̄) based on u(x) at x_1, x_2, ..., x_n
Write u(x_i) as a Taylor series centered at x̄:

u(x_i) = u(x̄) + (x_i − x̄) u'(x̄) + ··· + (1/k!)(x_i − x̄)^k u^{(k)}(x̄) + ···

Seek an approximation of the form

u^{(k)}(x̄) = c_1 u(x_1) + c_2 u(x_2) + ··· + c_n u(x_n) + O(h^p)

Collect terms multiplying u(x̄), u'(x̄), etc., to obtain:

(1/(i−1)!) Σ_{j=1}^{n} c_j (x_j − x̄)^{i−1} = 1 if i − 1 = k, 0 otherwise

This is a nonsingular Vandermonde system if the x_j are distinct; a sketch of solving it follows below.
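A sketch solving this Vandermonde system for the coefficients c_j (the helper name fdcoeffs is our own):

function c = fdcoeffs(k, xbar, x)
% x: row vector of distinct points; returns weights for u^(k)(xbar)
n = length(x);
A = zeros(n, n);
for i = 1:n
    A(i,:) = (x - xbar).^(i-1) / factorial(i-1);  % row i of the system
end
rhs = zeros(n, 1);  rhs(k+1) = 1;                 % picks out u^(k)(xbar)
c = (A \ rhs)';
end

For example, fdcoeffs(2, 0, [-h 0 h]) reproduces the three-point stencil [1, −2, 1]/h^2.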


Convergence of BVPs, Nonuniform Grids
The Finite Difference Method

Consider the Poisson equation with Dirichlet conditions:

u''(x) = f(x),   0 < x < 1,   u(0) = α,   u(1) = β

Introduce n uniformly spaced grid points x_j = jh, h = 1/(n + 1)
Set u_0 = α, u_{n+1} = β, and use the three-point difference approximation to get the discretization

(1/h^2)(u_{j−1} − 2u_j + u_{j+1}) = f(x_j),   j = 1, ..., n

This can be written as a linear system Au = f with

A = (1/h^2) tridiag(1, −2, 1),   u = [u_1, ..., u_n]^T,
f = [f(x_1) − α/h^2, f(x_2), ..., f(x_{n−1}), f(x_n) − β/h^2]^T
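A sketch assembling and solving this system on an assumed example with known solution u = sin(πx), so f = −π^2 sin(πx) and α = β = 0:

n = 99;  h = 1/(n+1);  x = (1:n)'*h;
alpha = 0;  beta = 0;
A = (diag(-2*ones(n,1)) + diag(ones(n-1,1),1) + diag(ones(n-1,1),-1))/h^2;
b = -pi^2*sin(pi*x);
b(1) = b(1) - alpha/h^2;  b(end) = b(end) - beta/h^2;  % boundary corrections
u = A\b;
max(abs(u - sin(pi*x)))   % O(h^2) error, as shown on the following slides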
Errors and Grid Function Norms

The error is e = u − û, where u is the numerical solution and û is the exact solution evaluated on the grid:

û = [u(x_1), ..., u(x_n)]^T

Measure errors in grid function norms, which are approximations of integrals and scale correctly as n → ∞:

‖e‖_∞ = max_j |e_j|
‖e‖_1 = h Σ_j |e_j|
‖e‖_2 = (h Σ_j |e_j|^2)^{1/2}
Local Truncation Error

Insert the exact solution u(x) into the difference scheme to get the local truncation error:

τ_j = (1/h^2)(u(x_{j−1}) − 2u(x_j) + u(x_{j+1})) − f(x_j)
    = u''(x_j) + (h^2/12) u''''(x_j) + O(h^4) − f(x_j)
    = (h^2/12) u''''(x_j) + O(h^4)

or

τ = [τ_1, ..., τ_n]^T = Aû − f
Errors

The linear system gives the error in terms of the LTE:

Au = f,   Aû = f + τ   =⇒   Ae = −τ

Introduce a superscript h to indicate that a problem depends on the grid spacing, and bound the norm of the error:

A^h e^h = −τ^h
e^h = −(A^h)^{−1} τ^h
‖e^h‖ = ‖(A^h)^{−1} τ^h‖ ≤ ‖(A^h)^{−1}‖ · ‖τ^h‖

If ‖(A^h)^{−1}‖ ≤ C for h ≤ h_0, then

‖e^h‖ ≤ C · ‖τ^h‖ → 0   if ‖τ^h‖ → 0 as h → 0
Stability, Consistency, and Convergence

Definition
A method A^h u^h = f^h is stable if (A^h)^{−1} exists and ‖(A^h)^{−1}‖ ≤ C for h ≤ h_0
It is consistent with the DE if ‖τ^h‖ → 0 as h → 0
It is convergent if ‖e^h‖ → 0 as h → 0

Theorem (Fundamental Theorem of Finite Difference Methods)
Consistency + Stability =⇒ Convergence

since ‖e^h‖ ≤ ‖(A^h)^{−1}‖ · ‖τ^h‖ ≤ C · ‖τ^h‖ → 0. A stronger statement is

O(h^p) LTE + Stability =⇒ O(h^p) global error

Stability in the 2-Norm

In the 2-norm, for our symmetric model problem matrix, we have

‖A‖_2 = ρ(A) = max_p |λ_p|,   ‖A^{−1}‖_2 = 1/min_p |λ_p|

and explicit expressions for the eigenvectors/eigenvalues:

u^p_j = sin(pπjh),   λ_p = (2/h^2)(cos(pπh) − 1)

The smallest (in magnitude) eigenvalue is

λ_1 = (2/h^2)(cos(πh) − 1) = −π^2 + O(h^2)   =⇒ stability
Convergence in the 2-Norm

This gives a bound on the error:

‖e^h‖_2 ≤ ‖(A^h)^{−1}‖_2 · ‖τ^h‖_2 ≈ (1/π^2) ‖τ^h‖_2

Since τ_j^h ≈ (h^2/12) u''''(x_j),

‖τ^h‖_2 ≈ (h^2/12) ‖u''''‖_2 = (h^2/12) ‖f''‖_2   =⇒   ‖e^h‖_2 = O(h^2)

While this implies convergence in the max-norm, 1/2 order is lost because of the grid function norm:

‖e^h‖_∞ ≤ (1/√h) ‖e^h‖_2 = O(h^{3/2})

But it can be shown that ‖(A^h)^{−1}‖_∞ = O(1), which implies ‖e^h‖_∞ = O(h^2)
Neumann Boundary Conditions

Consider the Poisson equation with Neumann/Dirichlet conditions:

u''(x) = f(x),   0 < x < 1,   u'(0) = σ,   u(1) = β

Various options for discretizing the Neumann condition:

1) First-order finite difference approximation (u_1 − u_0)/h = σ:

        [ −h   h            ]   [u_0    ]   [σ     ]
        [  1  −2   1        ]   [u_1    ]   [f(x_1)]
(1/h^2) [       ...         ] · [  :    ] = [  :   ]
        [       1  −2   1   ]   [u_n    ]   [f(x_n)]
        [            0  h^2 ]   [u_{n+1}]   [β     ]

But τ_0 = (1/h^2)(h u(x_1) − h u(x_0)) − σ = (h/2) u''(x_0) + O(h^2)   =⇒ global error = O(h)
Neumann Boundary Conditions

2) Introduce an extra point x_{−1} outside the domain, enforce the equation at x_0, and add a central difference approximation:

(1/h^2)(u_{−1} − 2u_0 + u_1) = f(x_0)
(1/(2h))(u_1 − u_{−1}) = σ

Elimination of u_{−1} gives

(1/h)(−u_0 + u_1) = σ + (h/2) f(x_0)

Same matrix structure as in 1), but with "correction term" (h/2) f(x_0)
Neumann Boundary Conditions

3) Second-order accurate one-sided difference approximation:

(1/h)((3/2) u_0 − 2u_1 + (1/2) u_2) = σ

        [3h/2 −2h  h/2      ]   [u_0    ]   [σ     ]
        [  1  −2    1       ]   [u_1    ]   [f(x_1)]
(1/h^2) [      ...          ] · [  :    ] = [  :   ]
        [      1   −2   1   ]   [u_n    ]   [f(x_n)]
        [           0   h^2 ]   [u_{n+1}]   [β     ]

Most general approach.


General Second-Order Linear BVP

Consider a general linear equation with Dirichlet conditions:

a(x) u''(x) + b(x) u'(x) + c(x) u(x) = f(x),   u(a) = α,   u(b) = β

Discretize using second-order approximations:

a_i (u_{i−1} − 2u_i + u_{i+1})/h^2 + b_i (u_{i+1} − u_{i−1})/(2h) + c_i u_i = f_i
Boundary Layers

Consider a steady-state advection-diffusion equation

ε u''(x) − u'(x) = f(x)

with Dirichlet conditions u(0) = α, u(1) = β

If ε is small, the differential equation is almost first-order, and a boundary layer will form (near the right boundary)

[Figure: solutions for ε = 1e−1, 3e−2, 1e−2, showing an increasingly sharp boundary layer near x = 1]
Boundary Layers

When discretized using centered second-order approximations, numerical oscillations appear if the boundary layer is under-resolved
Need nonuniform grids and/or stabilized numerical schemes

[Figure: computed solutions with h = 0.05 and h = 0.025, showing oscillations when the boundary layer is under-resolved]
