Ordinary Differential Equations
Now that we have studied differentiation and integration in isolation, let's look at applying these concepts to differential equations. One of the most famous ordinary differential equations is Newton's Second Law:

dv/dt = F/m    or    m d²x/dt² = F

where x(t) is position and v(t) is velocity. By integrating the differential equation, we may find the velocity and position at any time T:

v(T) = v(t = 0) + ∫_0^T (F/m) dt

x(T) = x(t = 0) + ∫_0^T v(t) dt
The differential equation describes the rate of change of the variables with respect to time.
y' = f(t, y)

with

y(t) = y_0 + ∫_a^t f(t, y) dt
Instead of trying to compute the value of y for all t ∈ [a, b], we choose a set of evenly spaced points in time, t_k = a + h k where h = (b − a)/N, so that:
y(a + h) = y(a) + ∫_a^{a+h} f(t, y) dt
In general:
y(t_k + h) = y(t_k) + ∫_{t_k}^{t_k + h} f(t, y) dt
Expanding y(t) in a Taylor series about t_k:

y(t) = y(t_k) + (t − t_k) y'(t_k) + ((t − t_k)²/2) y''(c)

But the differential equation tells us that y'(t_k) = f(t_k, y_k), so that:

y(t) = y(t_k) + (t − t_k) f(t_k, y_k) + ((t − t_k)²/2) y''(c)

Therefore, since t_{k+1} − t_k = h, we can compute y(t_k + h) = y(t_{k+1}) from:

y(t_{k+1}) = y(t_k) + (t_{k+1} − t_k) f(t_k, y_k) = y(t_k) + h f(t_k, y_k)

This is Euler's method.
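As a concrete illustration, here is a minimal sketch of Euler's method; the language (Python/NumPy) and the test problem y' = −y, y(0) = 1 are assumptions for illustration only, not part of the original notes.

```python
import numpy as np

def euler(f, a, b, y0, N):
    """Integrate y' = f(t, y) on [a, b] with N Euler steps of size h = (b - a)/N."""
    h = (b - a) / N
    t = a + h * np.arange(N + 1)
    y = np.empty(N + 1)
    y[0] = y0
    for k in range(N):
        # y_{k+1} = y_k + h f(t_k, y_k)
        y[k + 1] = y[k] + h * f(t[k], y[k])
    return t, y

if __name__ == "__main__":
    # Assumed test problem: y' = -y, y(0) = 1, exact solution exp(-t).
    t, y = euler(lambda t, y: -y, 0.0, 1.0, 1.0, 100)
    print(y[-1], np.exp(-1.0))
```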
Error
As in numerical integration, there are two types of discretization error:
The local truncation error is the error made in a single step, where Euler's method approximates

y(t_k) = y(t_{k−1}) + ∫_{t_{k−1}}^{t_k} f(t, y) dt

by y_k = y_{k−1} + h f(t_{k−1}, y_{k−1}):

ε_k = y(t_k) − [ y_{k−1} + h f(t_{k−1}, y_{k−1}) ] = (h²/2) y''(c)     (local truncation error)

The total (global) error for Euler's method is

E = |y(b) − y_N| = Σ_{k=1}^{N} ε_k

Summing the error over the N intervals between a and b yields the global error:

E = Σ_{k=1}^{N} (h²/2) y''(c) = N (h²/2) y''(c_1) = ((b − a) h / 2) y''(c_1) = O(h)

Therefore, for Euler's method the local truncation error is ε_k = O(h²) and the global error is E = O(h).
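One way to see the O(h) global error in practice is to halve the step size repeatedly and watch the error at t = b roughly halve as well. A small sketch, again assuming the test problem y' = −y:

```python
import numpy as np

def euler_final(f, a, b, y0, N):
    """Return the Euler approximation of y(b) using N steps."""
    h = (b - a) / N
    t, y = a, y0
    for _ in range(N):
        y += h * f(t, y)
        t += h
    return y

# Assumed test problem: y' = -y, y(0) = 1, exact y(1) = exp(-1).
exact = np.exp(-1.0)
for N in (10, 20, 40, 80, 160):
    err = abs(euler_final(lambda t, y: -y, 0.0, 1.0, 1.0, N) - exact)
    # The error should roughly halve each time N doubles (h is halved): global error = O(h).
    print(N, err)
```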
Multistep Methods
Euler's method is simple but not very accurate. To improve the method, the integral ∫_{t_k}^{t_{k+1}} f(t, y) dt must be approximated using f(t, y) at more time locations. Multistep methods accomplish this by including f(t_{k−1}, y_{k−1}), f(t_{k−2}, y_{k−2}), .... Say that we know the value of y(t) at t_{k−1} and t_k. Let's use those values of y to compute y(t_{k+1}):
y(t_{k+1}) = y(t_k) + ∫_{t_k}^{t_{k+1}} f(t, y) dt
Our job is to approximate the integral of f(t, y) between t_k and t_{k+1}. What to do? Interpolate f(t, y) using the values of f at t_{k−1} and t_k, then integrate the interpolant from t_k to t_{k+1}. Let's do it. Interpolate:
P(t) = f(t_{k−1}, y_{k−1}) (t − t_k)/(t_{k−1} − t_k) + f(t_k, y_k) (t − t_{k−1})/(t_k − t_{k−1})

But we assume that the points are evenly spaced in time so that t_k − t_{k−1} = h:

P(t) = −(f_{k−1}/h)(t − t_k) + (f_k/h)(t − t_{k−1})

where f_k ≡ f(t_k, y_k).
y_{k+1} = y_k + ∫_{t_k}^{t_{k+1}} f(t, y) dt ≈ y_k + ∫_{t_k}^{t_{k+1}} P(t) dt

= y_k + ∫_{t_k}^{t_{k+1}} [ −(f_{k−1}/h)(t − t_k) + (f_k/h)(t − t_{k−1}) ] dt

Substituting s = t − t_k (so that t − t_{k−1} = s + h):

= y_k − (f_{k−1}/h) ∫_0^h s ds + (f_k/h) ∫_0^h (s + h) ds

= y_k − (f_{k−1}/h)(h²/2) + (f_k/h)(h²/2 + h²)

y_{k+1} = y_k + h [ (3/2) f_k − (1/2) f_{k−1} ]
Adams-Bashforth Methods
The general Adams-Bashforth method may be written:
y_{k+1} = y_k + h Σ_{n=0}^{m−1} w_n f(t_{k−n}, y_{k−n})

for an m-step method with weights w_n.
The methods are labeled according to the global discretization error:

First Order (Euler):   y_{k+1} = y_k + h f(t_k, y_k)

Second Order:   y_{k+1} = y_k + h [ (3/2) f(t_k, y_k) − (1/2) f(t_{k−1}, y_{k−1}) ]

Third Order:   y_{k+1} = y_k + h [ (23/12) f(t_k, y_k) − (16/12) f(t_{k−1}, y_{k−1}) + (5/12) f(t_{k−2}, y_{k−2}) ]
These methods require uniform steps in time (i.e. t_{k+1} − t_k = h for all k) and, except for Euler's method, require starting values (y_0 and y_1 for the second-order method; y_0, y_1, and y_2 for the third-order method). Euler just needs y_0.
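A minimal sketch of the second-order Adams-Bashforth method, assuming Python/NumPy and a single Euler step to generate the required starting value y_1 (a higher-order starter could also be used):

```python
import numpy as np

def adams_bashforth2(f, a, b, y0, N):
    """Second-order Adams-Bashforth for y' = f(t, y) on [a, b] with uniform step h."""
    h = (b - a) / N
    t = a + h * np.arange(N + 1)
    y = np.empty(N + 1)
    y[0] = y0
    # Starting value y_1 from one Euler step.
    y[1] = y[0] + h * f(t[0], y[0])
    for k in range(1, N):
        # y_{k+1} = y_k + h [ (3/2) f_k - (1/2) f_{k-1} ]
        y[k + 1] = y[k] + h * (1.5 * f(t[k], y[k]) - 0.5 * f(t[k - 1], y[k - 1]))
    return t, y

# Assumed test problem: y' = -y, y(0) = 1.
t, y = adams_bashforth2(lambda t, y: -y, 0.0, 1.0, 1.0, 50)
print(y[-1], np.exp(-1.0))
```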
Consider the system

x' = −x      y' = −20y
x(0) = 1     y(0) = 1

The solution to this system is x(t) = exp(−t), y(t) = exp(−20t). Note that y changes much more quickly than x.
With an explicit method (one that only uses past or present information to compute x_{n+1} and y_{n+1}), the time step will be restricted by the behavior of y(t). However, an implicit method can take relatively larger time steps and still properly compute the behavior of x(t) and y(t). Note that if f(t, y) is nonlinear, an implicit method will require the solution of a system of nonlinear equations, leading to the use of Newton's method or a similar algorithm.
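As a sketch of this behavior, compare explicit and implicit Euler on the stiff equation y' = −20y. Because this equation is linear, the implicit update can be solved in closed form and no Newton iteration is needed; the step size h = 0.2 is an assumed value chosen to show the instability of the explicit method.

```python
import numpy as np

def forward_euler_decay(lmbda, y0, h, N):
    """Explicit Euler for y' = lmbda*y: y_{n+1} = (1 + h*lmbda) y_n."""
    y = y0
    for _ in range(N):
        y = (1.0 + h * lmbda) * y
    return y

def backward_euler_decay(lmbda, y0, h, N):
    """Implicit Euler for y' = lmbda*y: y_{n+1} = y_n / (1 - h*lmbda)."""
    y = y0
    for _ in range(N):
        y = y / (1.0 - h * lmbda)
    return y

# y' = -20 y, y(0) = 1, integrated to t = 1 with step h = 0.2 (too large for explicit Euler).
h, N = 0.2, 5
print("explicit:", forward_euler_decay(-20.0, 1.0, h, N))   # grows in magnitude: |1 + h*lmbda| = 3 > 1
print("implicit:", backward_euler_decay(-20.0, 1.0, h, N))  # decays, like the exact exp(-20t)
print("exact:   ", np.exp(-20.0 * h * N))
```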
Adams-Moulton Methods
These implicit multistep methods follow the same approach as the (explicit) Adams-Bashforth formulas:
y_{n+1} = y_n + ∫_{t_n}^{t_{n+1}} f(t, y) dt
However, they include f(t_{n+1}, y_{n+1}) in computing an approximation of the integral. Approximating the integral in this way leads immediately to two possibilities:

Backward Euler, O(h):   y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})

Trapezoidal Rule, O(h²):   y_{n+1} = y_n + (h/2) [ f(t_n, y_n) + f(t_{n+1}, y_{n+1}) ]

To find higher-order formulas, we can interpolate a polynomial through f(t_{n+1}, y_{n+1}), f(t_n, y_n), f(t_{n−1}, y_{n−1}), ..., and then integrate that polynomial from t_n to t_{n+1}. The third-order Adams-Moulton method is:

Third Order:   y_{n+1} = y_n + h [ (5/12) f(t_{n+1}, y_{n+1}) + (8/12) f(t_n, y_n) − (1/12) f(t_{n−1}, y_{n−1}) ]
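As a sketch of how the implicit equation is handled in practice, here is one trapezoidal-rule step in which the implicit relation is solved by simple fixed-point iteration started from an Euler predictor (Newton's method could be used instead); the nonlinear test problem y' = −y³ is assumed for illustration.

```python
def trapezoidal_step(f, t, y, h, iters=5):
    """One implicit trapezoidal step: solve y_new = y + h/2 (f(t, y) + f(t+h, y_new)).

    The implicit equation is solved by fixed-point iteration started from an
    explicit Euler predictor; Newton's method could be used instead.
    """
    fy = f(t, y)
    y_new = y + h * fy               # Euler predictor as the initial guess
    for _ in range(iters):
        y_new = y + 0.5 * h * (fy + f(t + h, y_new))
    return y_new

# Assumed nonlinear test problem: y' = -y**3, y(0) = 1.
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = trapezoidal_step(lambda t, y: -y**3, t, y, h)
    t += h
print(t, y)
```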
Implicit formulas can also be obtained by working with the derivative directly. Consider

y' = f(t, y)   with   y(t = a) = y_0

and approximate the derivative dy/dt at t = t_{n+1} using one of our one-sided, backward difference formulas from the last chapter. For example, the first-order backward difference y'(t_{n+1}) ≈ (y_{n+1} − y_n)/h recovers the backward Euler method y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}).
Runge-Kutta Methods
Let's now return to the problem of computing:

y' = f(t, y)

y(t + h) = y(t) + ∫_t^{t+h} f(t, y) dt
We can compute a more accurate integral by including more values of f(t, y) for t ∈ [t_n, t_{n+1}] (rather than using values to the left of t_n as in multistep methods). Two possible approximations of the integral:
Trapezoidal rule:   y(t + h) = y(t) + (h/2) [ f(t, y(t)) + f(t + h, y(t + h)) ]

Midpoint rule:   y(t + h) = y(t) + h f(t + h/2, y(t + h/2))

We can approximate y(t + h) and y(t + h/2) using Euler's method:

Modified Euler:   y(t + h) = y(t) + (h/2) [ f(t, y) + f(t + h, y + h f(t, y)) ]

Midpoint method:   y(t + h) = y(t) + h f(t + h/2, y + (h/2) f(t, y))

or, writing the midpoint method in two stages:

k1 = h f(t, y(t))
k2 = h f(t + h/2, y(t) + k1/2)
y(t + h) = y(t) + k2
Fourth-Order Runge-Kutta
This is one of the most popular algorithms for solving ordinary differential equations. Solve for four different values of f(t, y):

k1 = h f(t_n, y_n)
k2 = h f(t_n + h/2, y_n + k1/2)
k3 = h f(t_n + h/2, y_n + k2/2)
k4 = h f(t_n + h, y_n + k3)

y_{n+1} = y_n + (1/6)(k1 + 2 k2 + 2 k3 + k4)

This scheme has a global error of O(h⁴) and a local error of O(h⁵). We have been able to construct a more accurate approximation of y_{n+1} by sampling f(t, y) at more points within the step [t_n, t_{n+1}].
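A minimal sketch of one classical fourth-order Runge-Kutta step; the test problem y' = −y is assumed for illustration.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = h * f(t, y)
    k2 = h * f(t + 0.5 * h, y + 0.5 * k1)
    k3 = h * f(t + 0.5 * h, y + 0.5 * k2)
    k4 = h * f(t + h, y + k3)
    return y + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Assumed test problem: y' = -y, y(0) = 1, integrated to t = 1 with h = 0.1.
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(y, np.exp(-1.0))
```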
Example
Let's try to use the fourth-order Runge-Kutta formula to solve the following second-order ordinary differential equation.
y'' + A y = B cos(ω t)

To solve this, we convert it to a system of two first-order ODEs. Letting v = y',

    d/dt [ y ]   [  0  1 ] [ y ]   [     0      ]
         [ v ] = [ -A  0 ] [ v ] + [ B cos(ω t) ]

so that with

    x = [ y ]    and    f(t, x) = [  0  1 ] x + [     0      ]
        [ v ]                     [ -A  0 ]     [ B cos(ω t) ]

the problem is in the standard form x' = f(t, x).
We can now apply the fourth order Runge-Kutta method to this problem.
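A sketch of what this might look like in code, applying the same Runge-Kutta step to the vector x = (y, v); the coefficient values A = 4, B = 1, ω = 2 and the initial conditions are assumptions made purely for illustration.

```python
import numpy as np

A, B, omega = 4.0, 1.0, 2.0   # assumed example coefficients

def f(t, x):
    """Right-hand side of the first-order system x' = f(t, x), with x = (y, v)."""
    y, v = x
    return np.array([v, -A * y + B * np.cos(omega * t)])

def rk4_step(f, t, x, h):
    k1 = h * f(t, x)
    k2 = h * f(t + 0.5 * h, x + 0.5 * k1)
    k3 = h * f(t + 0.5 * h, x + 0.5 * k2)
    k4 = h * f(t + h, x + k3)
    return x + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Assumed initial conditions y(0) = 1, y'(0) = 0, integrated to t = 10 with h = 0.01.
N, h = 1000, 0.01
t, x = 0.0, np.array([1.0, 0.0])
for _ in range(N):
    x = rk4_step(f, t, x, h)
    t += h
print(t, x)   # x[0] is y(10), x[1] is y'(10)
```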
Consider now the two-point boundary value problem

y'' = f(t, y, y'),   y(t = a) = α,   y(t = b) = β

where α and β are the boundary conditions on y. If this were an initial value problem, two initial conditions y(t = a) and y'(t = a) would be required. As long as we can solve initial value problems, we can turn the boundary value problem

y'' = f(t, y, y'),   y(t = a) = α,   y(t = b) = β

into the initial value problem:

y'' = f(t, y, y'),   y(t = a) = α,   y'(t = a) = C

We don't have an initial value for y'(a), so we try a number of values for y'(a) = C and integrate each solution y(t) until we find one with y(t = b) = β.

Algorithm:
1. Integrate the ODE using fourth-order Runge-Kutta with y(a) = α and y'(a) = C.
2. Evaluate y(t = b) and compare it with the boundary condition y(b) = β.
3. Choose a new C and repeat until y(t = b) = β.
Note: use the bisection algorithm to find the correct value of C.
y(0)    y'(0) = C    y(1)
0       -2           -1.0126
0       -1           -0.3141
0        0            0.3844
0       -0.5          0.0351
0       -0.75        -0.1395
0       -0.625       -0.0522
0       -0.5625      -0.0085
[Figure: the trial solutions y(t) on 0 ≤ t ≤ 1 corresponding to the last four values of C in the table, labeled c1 = −0.5, c2 = −0.75, c3 = −0.625, c4 = −0.5625.]
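A sketch of the shooting algorithm, combining the fourth-order Runge-Kutta integrator with bisection on C = y'(a). The boundary value problem y'' = −4y, y(0) = 0, y(1) = 1 and the bracket [0, 5] are assumed for illustration only (the equation behind the table and figure above is not recoverable from the notes).

```python
import numpy as np

def rk4_system(f, a, b, x0, N):
    """Integrate x' = f(t, x) from a to b with N RK4 steps; return x(b)."""
    h = (b - a) / N
    t, x = a, np.array(x0, dtype=float)
    for _ in range(N):
        k1 = h * f(t, x)
        k2 = h * f(t + 0.5 * h, x + 0.5 * k1)
        k3 = h * f(t + 0.5 * h, x + 0.5 * k2)
        k4 = h * f(t + h, x + k3)
        x += (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += h
    return x

def shoot(f2, a, b, alpha, beta, C_lo, C_hi, N=200, tol=1e-8):
    """Shooting method for y'' = f2(t, y, y'), y(a) = alpha, y(b) = beta.

    Bisect on C = y'(a) until the terminal value y(b; C) matches beta.
    Assumes the residual y(b; C) - beta changes sign on [C_lo, C_hi].
    """
    def residual(C):
        rhs = lambda t, x: np.array([x[1], f2(t, x[0], x[1])])
        return rk4_system(rhs, a, b, [alpha, C], N)[0] - beta

    r_lo = residual(C_lo)
    while C_hi - C_lo > tol:
        C_mid = 0.5 * (C_lo + C_hi)
        r_mid = residual(C_mid)
        if r_mid * r_lo > 0:
            C_lo, r_lo = C_mid, r_mid
        else:
            C_hi = C_mid
    return 0.5 * (C_lo + C_hi)

# Assumed example BVP: y'' = -4y on [0, 1], y(0) = 0, y(1) = 1; exact slope y'(0) = 2/sin(2).
C = shoot(lambda t, y, yp: -4.0 * y, 0.0, 1.0, 0.0, 1.0, 0.0, 5.0)
print(C, 2.0 / np.sin(2.0))
```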
An alternative to shooting is to attack the boundary value problem

y'' = f(t, y, y'),   y(t = a) = α,   y(t = b) = β

directly. Here, we will discretize the differential equation using finite differences and then solve the resulting system either using Gaussian elimination or iteratively. Let's first consider the solution when f(t, y, y') is linear, so that the differential equation becomes:

y'' = p(t) y' + q(t) y + r(t)

At the grid points t_i we approximate the derivatives with the central difference formulas:

y''(t_i) = (y_{i+1} − 2 y_i + y_{i−1}) / h²
y'(t_i) = (y_{i+1} − y_{i−1}) / (2h)
This equation will be valid for a < ti < b. However, when t = a or t = b, we have to apply the boundary conditions:
y(t = a) = α   and   y(t = b) = β

With the grid

t_0 = a,  t_1 = a + h,  t_2 = a + 2h,  ...,  t_{N−2} = b − 2h,  t_{N−1} = b − h,  t_N = b

the discretized equations are

y_0 = α
(y_2 − 2y_1 + y_0)/h² = p(t_1)(y_2 − y_0)/(2h) + q(t_1) y_1 + r(t_1)
(y_3 − 2y_2 + y_1)/h² = p(t_2)(y_3 − y_1)/(2h) + q(t_2) y_2 + r(t_2)
...
(y_{N−1} − 2y_{N−2} + y_{N−3})/h² = p(t_{N−2})(y_{N−1} − y_{N−3})/(2h) + q(t_{N−2}) y_{N−2} + r(t_{N−2})
(y_N − 2y_{N−1} + y_{N−2})/h² = p(t_{N−1})(y_N − y_{N−2})/(2h) + q(t_{N−1}) y_{N−1} + r(t_{N−1})
y_N = β

Multiplying each interior equation by h² and collecting terms gives the tridiagonal system

[ 1                                                          ] [ y_0 ]   [ α        ]
[ 1 + (h/2)p(t_1)   −2 − h²q(t_1)   1 − (h/2)p(t_1)          ] [ y_1 ]   [ h²r(t_1) ]
[        1 + (h/2)p(t_2)   −2 − h²q(t_2)   1 − (h/2)p(t_2)   ] [ y_2 ] = [ h²r(t_2) ]
[                 ...              ...             ...       ] [ ... ]   [ ...      ]
[                                                        1   ] [ y_N ]   [ β        ]

where all other entries are zero.
This system may be solved using Gaussian elimination or iteratively using Jacobi, Gauss-Seidel or SOR (successive over-relaxation). If p, q and r are continuous on the interval [a, b] and q(x) ≥ 0 for a ≤ x ≤ b, then the above system has a unique solution provided h < 2/L, where L = max_{x∈[a,b]} |p(x)|.
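A minimal sketch of the finite-difference solve for the linear case y'' = p(t)y' + q(t)y + r(t); the example coefficients and boundary values are assumed, and for brevity the system is solved with a dense Gaussian-elimination routine rather than a specialized tridiagonal solver.

```python
import numpy as np

def linear_bvp(p, q, r, a, b, alpha, beta, N):
    """Solve y'' = p(t) y' + q(t) y + r(t), y(a) = alpha, y(b) = beta on N+1 grid points."""
    h = (b - a) / N
    t = a + h * np.arange(N + 1)
    A = np.zeros((N + 1, N + 1))
    rhs = np.zeros(N + 1)
    A[0, 0], rhs[0] = 1.0, alpha        # boundary condition y_0 = alpha
    A[N, N], rhs[N] = 1.0, beta         # boundary condition y_N = beta
    for i in range(1, N):
        A[i, i - 1] = 1.0 + 0.5 * h * p(t[i])
        A[i, i]     = -2.0 - h * h * q(t[i])
        A[i, i + 1] = 1.0 - 0.5 * h * p(t[i])
        rhs[i] = h * h * r(t[i])
    # Dense Gaussian elimination; a tridiagonal (Thomas) solver would be more efficient.
    y = np.linalg.solve(A, rhs)
    return t, y

# Assumed example: y'' = y + t (p = 0, q = 1, r = t) on [0, 1], y(0) = 0, y(1) = 1.
t, y = linear_bvp(lambda t: 0.0, lambda t: 1.0, lambda t: t, 0.0, 1.0, 0.0, 1.0, 50)
# Exact solution of this assumed example: y(t) = 2 sinh(t)/sinh(1) - t.
print(y[25], 2.0 * np.sinh(0.5) / np.sinh(1.0) - 0.5)
```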
Nonlinear Systems
y'' = f(t, y, y'),   t ∈ [a, b],   y(t = a) = α,   y(t = b) = β
When f is nonlinear, the solution becomes more difficult and requires iteration. We can proceed as before to set up the system (now of nonlinear equations):
With the grid

t_0 = a,  t_1 = a + h,  t_2 = a + 2h,  ...,  t_{N−2} = b − 2h,  t_{N−1} = b − h,  t_N = b

the discretized equations are

y_0 = α
−y_2 + 2y_1 − y_0 + h² f(t_1, y_1, (y_2 − y_0)/(2h)) = 0
−y_3 + 2y_2 − y_1 + h² f(t_2, y_2, (y_3 − y_1)/(2h)) = 0
...
−y_{N−1} + 2y_{N−2} − y_{N−3} + h² f(t_{N−2}, y_{N−2}, (y_{N−1} − y_{N−3})/(2h)) = 0
−y_N + 2y_{N−1} − y_{N−2} + h² f(t_{N−1}, y_{N−1}, (y_N − y_{N−2})/(2h)) = 0
y_N = β
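A sketch of how this nonlinear system might be set up and solved; here scipy.optimize.fsolve (a Newton-like solver) stands in for Newton's method, and the example f(t, y, y') = y² with its boundary values is assumed for illustration only.

```python
import numpy as np
from scipy.optimize import fsolve

def nonlinear_bvp(f, a, b, alpha, beta, N):
    """Solve y'' = f(t, y, y'), y(a) = alpha, y(b) = beta by finite differences.

    The nonlinear algebraic system above is solved with fsolve, starting from
    the straight line between the boundary values as the initial guess.
    """
    h = (b - a) / N
    t = a + h * np.arange(N + 1)

    def residual(y):
        F = np.empty(N + 1)
        F[0] = y[0] - alpha                     # y_0 = alpha
        F[N] = y[N] - beta                      # y_N = beta
        for i in range(1, N):
            yp = (y[i + 1] - y[i - 1]) / (2.0 * h)
            # -y_{i+1} + 2 y_i - y_{i-1} + h^2 f(t_i, y_i, y'_i) = 0
            F[i] = -y[i + 1] + 2.0 * y[i] - y[i - 1] + h * h * f(t[i], y[i], yp)
        return F

    y0 = alpha + (beta - alpha) * (t - a) / (b - a)   # initial guess: straight line
    return t, fsolve(residual, y0)

# Assumed nonlinear example: y'' = y^2 on [0, 1] with y(0) = 1, y(1) = 0.5.
t, y = nonlinear_bvp(lambda t, y, yp: y * y, 0.0, 1.0, 1.0, 0.5, 40)
print(y[20])
```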