Runge-Kutta Methods
Outline: Runge-Kutta methods, vectorisation of ODEs, adaptive stepsize, subjects not covered, summary
We want to solve

dy/dx = f(x, y)

Take a trial step of size Δ/2 to the midpoint:

y_mid = y_n + (Δ/2) f(x_n, y_n)    [Looks like Euler!]
Error analysis

Error in the derivative:

[y_true(x_{n+1}) − y_true(x_n)] / Δ = f(x_mid, y_true(x_mid)) + O(Δ²)

Error in f(x_mid, y_mid):

f(x_mid, y_mid) = f(x_mid, y_true(x_mid) + O(Δ²)) = f_true + O(Δ²)

Both errors are O(Δ²), so a single midpoint step is accurate to O(Δ³) and the global error is O(Δ²): the midpoint method is second order.
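This scaling can be checked numerically. The following is a minimal sketch with a test problem of my own choosing (dy/dx = y, y(0) = 1, integrated to x = 1, exact answer e): halving the step size should reduce the midpoint error by roughly a factor of 4, but the Euler error only by a factor of 2.

% Compare global errors of Euler and midpoint for dy/dx = y, y(0) = 1 (illustrative test)
f = @(x,y) y;
for dx = [0.1, 0.05]
    yE = 1; yM = 1;
    for x = 0:dx:1-dx
        yE = yE + dx*f(x, yE);              % Euler step
        ymid = yM + 0.5*dx*f(x, yM);        % trial step to the midpoint
        yM = yM + dx*f(x + 0.5*dx, ymid);   % midpoint step
    end
    fprintf('dx=%.3f  Euler error %.2e  midpoint error %.2e\n', ...
            dx, abs(yE - exp(1)), abs(yM - exp(1)));
end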
Runge-Kutta methods
Midpoint method: take a trial step so that the right-hand side f(x, y) can be evaluated at the midpoint, which gives improved accuracy. There are many ways of evaluating f(x, y) that agree to leading order; this freedom can be used to eliminate higher-order error terms.
Why Runge-Kutta?
RK4 is preferable to Euler (midpoint) if the improved accuracy allows us to take steps 4 (2) times longer, compensating for the extra function evaluations per step. This is usually the case (but not always). Runge-Kutta does not guarantee stability, but:
1. Accuracy is sometimes more important (short runs)
2. Instability can be delayed by higher accuracy
3. Runge-Kutta methods work for all types of ODEs; they can be made into black-box ODE solvers
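For reference, a single step of the classical fourth-order Runge-Kutta scheme can be written as follows; the function name and signature are my own, but the coefficients are the standard RK4 ones:

function ynew = rk4_step(f, x, y, dx)
    % One classical RK4 step for dy/dx = f(x, y): four evaluations of f
    k1 = f(x,          y);
    k2 = f(x + 0.5*dx, y + 0.5*dx*k1);
    k3 = f(x + 0.5*dx, y + 0.5*dx*k2);
    k4 = f(x + dx,     y + dx*k3);
    ynew = y + (dx/6)*(k1 + 2*k2 + 2*k3 + k4);   % local error O(dx^5), global O(dx^4)
end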
Vectorisation of ODEs

An ODE of any order N can be rewritten as N coupled first-order ODEs. With y1 = y, y2 = dy/dt, ..., yN = d^(N-1)y/dt^(N-1):

dy1/dt = y2
dy2/dt = y3
...
dyN/dt = f(y1, ..., yN, t)

The whole state [y(t)] = (y1, ..., yN) is then advanced as one vector.
For example, the Kepler orbit problem, with y1 = r, y2 = dr/dt, y3 = dφ/dt, y4 = φ:

dy1/dt = y2
dy2/dt = y1 y3² − GM / y1²
dy3/dt = −2 y2 y3 / y1
dy4/dt = y3
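A sketch of the corresponding right-hand-side function in MATLAB; the function name, the choice of units GM = 1 and the state ordering are illustrative assumptions, not fixed by the slides:

function dy = kepler(t, y)
    % Planar Kepler problem; y = [r, dr/dt, dphi/dt, phi] (assumed ordering), GM = 1
    GM = 1;
    dy(1) = y(2);                        % dr/dt
    dy(2) = y(1)*y(3)^2 - GM/y(1)^2;     % radial acceleration
    dy(3) = -2*y(2)*y(3)/y(1);           % from conservation of angular momentum
    dy(4) = y(3);                        % dphi/dt
end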
We can write the Euler algorithm (and midpoint and RK4) in vector form:
dy1/dx = f1(x, y1, ..., yn)
dy2/dx = f2(x, y1, ..., yn)
...
dyn/dx = fn(x, y1, ..., yn)

or, compactly,

dy/dx = f(x, y)
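In this notation a single Euler step advances all n components at once. A minimal sketch, assuming the state is stored row-wise in y and f is a function handle returning a row vector (these conventions are mine):

% Vectorised Euler: the same statement advances the whole state vector,
% no matter how many coupled equations there are
for n = 1:N-1
    y(n+1,:) = y(n,:) + dx * f(x(n), y(n,:));
end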
Pendulum
The equation of motion for a simple pendulum is

d²θ/dt² = −(g/ℓ) sin θ

Measuring time in units of sqrt(ℓ/g) this becomes

d²θ/dt² = −sin θ

With y1 = θ and y2 = dθ/dt the vector form is

dy/dt = d/dt (y1, y2) = (y2, −sin y1)
MATLAB implementation
function dy = pendulum(t,y)
    % Right-hand side of the dimensionless pendulum equation: y = [theta, dtheta/dt]
    dy(1) = y(2);        % d(theta)/dt
    dy(2) = -sin(y(1));  % d^2(theta)/dt^2 = -sin(theta)
end
Midpoint method
for n = 1:N-1
    % Trial (Euler-like) step of size dt/2 to the midpoint
    ymid = y(n,:) + 0.5*dt*pendulum(t(n), y(n,:));
    % Full step using the right-hand side evaluated at the midpoint
    y(n+1,:) = y(n,:) + dt*pendulum(t(n)+0.5*dt, ymid);
end
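A minimal driver for this loop might look as follows; the number of steps, the step size and the initial condition are illustrative choices, not values from the lecture:

% Integrate the pendulum with the midpoint method (illustrative setup)
N  = 2000;                  % number of time steps
dt = 0.01;                  % step size
t  = (0:N-1)*dt;            % time grid
y  = zeros(N, 2);           % each row is [theta, dtheta/dt]
y(1,:) = [1.0, 0.0];        % start at 1 rad, at rest
for n = 1:N-1
    ymid     = y(n,:) + 0.5*dt*pendulum(t(n), y(n,:));
    y(n+1,:) = y(n,:) + dt*pendulum(t(n)+0.5*dt, ymid);
end
plot(t, y(:,1))             % angle as a function of time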
Near perihelion: fast motion, r(t) and φ(t) vary rapidly, so we need small time steps. Near aphelion: slow motion, so we can afford bigger time steps.
Adaptive stepsize
Let the computer decide the stepsize depending on what precision we want: estimate the error for the current Δt, increase Δt when the error estimate is too small, and decrease Δt when the error estimate is too big.
What is too small, too big?
A fixed number is not useful: in the Kepler problem φ is in radians but r could be in metres!
Try to keep the relative error δr/r below a relative tolerance ε_rel. Since r can become zero, we also need an absolute tolerance (a minimum scale below which the relative test is not applied). Use physical insight!
The optimal step size can be estimated using a Taylor expansion:

Δt_opt = Δt (ε_rel / ε)^(1/N),    N = order of the algorithm

where ε is the estimated error of the current step.
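A minimal sketch of how this formula can drive the step size. Here the error estimate comes from comparing one full midpoint step with two half steps (step doubling); that estimator, the tolerances and the test values are illustrative choices, not necessarily those used in the lecture:

% Adaptive-stepsize sketch: step doubling gives an error estimate,
% the formula above rescales dt (illustrative values throughout)
f    = @(t,y) [y(2), -sin(y(1))];                    % pendulum right-hand side (row vector)
step = @(t,y,h) y + h*f(t + h/2, y + 0.5*h*f(t,y));  % one midpoint step of size h
t = 0;  y = [1, 0];  dt = 0.1;                       % current state and trial step
eps_rel = 1e-6;  eps_abs = 1e-12;                    % relative and absolute tolerances
Nord = 2;                                            % order of the midpoint method
y1 = step(t, y, dt);                                 % one full step
y2 = step(t + dt/2, step(t, y, dt/2), dt/2);         % two half steps
err    = max(abs(y2 - y1) ./ max(abs(y2), eps_abs)); % relative error estimate
dt_opt = dt * (eps_rel/err)^(1/Nord);                % formula from the slide
dt     = min(2*dt, max(0.5*dt, 0.9*dt_opt));         % limit how fast dt may change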
Subjects not covered

Stiff sets of equations
Implicit methods
Richardson extrapolation
Boundary value problems (next!)
Summary
Adaptive stepsize
Allows us to vary the stepsize locally during integration
Based on estimating the error in the algorithm
Vectorising coupled ODEs and using function handles allows us to write universal ODE solvers
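As a sketch of what "universal" means here, the midpoint loop above can be wrapped into a solver that accepts any right-hand side through a function handle; the solver name and signature below are my own illustration:

function [t, y] = midpoint_solver(f, tspan, y0, N)
    % Fixed-step midpoint integrator for dy/dt = f(t, y)
    % f     : function handle returning dy/dt as a row vector
    % tspan : [t_start, t_end], y0 : initial state (row vector), N : number of grid points
    dt = (tspan(2) - tspan(1)) / (N - 1);
    t  = tspan(1) + (0:N-1)*dt;
    y  = zeros(N, numel(y0));
    y(1,:) = y0;
    for n = 1:N-1
        ymid     = y(n,:) + 0.5*dt*f(t(n), y(n,:));
        y(n+1,:) = y(n,:) + dt*f(t(n) + 0.5*dt, ymid);
    end
end

It can then be called with any system, e.g. [t, y] = midpoint_solver(@pendulum, [0 20], [1 0], 2000);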