Numerical Method

The Trapezoidal Rule

The first method we shall develop is known as the Trapezoidal Rule. In this method we
approximate f(x) with a collection of line segments and integrate across each of these.

Let P be a partition of [a,b] into n subintervals of equal width

\[ h = \frac{b-a}{n}, \]

with nodes x_i = a + ih for i = 0, 1, \ldots, n. On each subinterval [x_{i-1}, x_i] of P we approximate f(x) with a line segment. Here, instead of approximating f(x) with a horizontal line segment over [x_{i-1}, x_i], we shall approximate f(x) with the line segment that has the points (x_{i-1}, f(x_{i-1})) and (x_i, f(x_i)) as its
endpoints--points that lie on the graph of y = f(x).

Figure 2: Approximating the graph of y = f(x) with line segments across successive
intervals to obtain the Trapezoidal Rule.

Since exactly one line can pass through two distinct points, the line that interpolates these points is unique. Thus on [x_{i-1}, x_i] we approximate f(x) with the unique line

\[ y = f(x_{i-1}) + \frac{f(x_i) - f(x_{i-1})}{x_i - x_{i-1}}\,(x - x_{i-1}). \]

Therefore

\[ \int_{x_{i-1}}^{x_i} f(x)\,dx \approx \int_{x_{i-1}}^{x_i} \left[ f(x_{i-1}) + \frac{f(x_i) - f(x_{i-1})}{x_i - x_{i-1}}\,(x - x_{i-1}) \right] dx. \]

By evaluating the integral on the right, we obtain

\[ \int_{x_{i-1}}^{x_i} f(x)\,dx \approx \frac{h}{2}\left[ f(x_{i-1}) + f(x_i) \right], \]

since x_i - x_{i-1} = h for each i. Summing the definite integrals over each subinterval provides us with the approximation

\[ \int_a^b f(x)\,dx \approx \frac{h}{2}\left[ f(x_0) + f(x_1) \right] + \frac{h}{2}\left[ f(x_1) + f(x_2) \right] + \cdots + \frac{h}{2}\left[ f(x_{n-1}) + f(x_n) \right]. \]

By simplifying this sum we obtain the approximation scheme

\[ \int_a^b f(x)\,dx \approx \frac{h}{2}\left[ f(x_0) + 2f(x_1) + 2f(x_2) + \cdots + 2f(x_{n-1}) + f(x_n) \right]. \tag{3} \]

This is the Trapezoidal Rule. It is known by this name because on each subinterval [x_{i-1}, x_i] we are approximating the region bounded by the curve y = f(x), the x-axis, and the lines x = x_{i-1} and x = x_i, with a region having a trapezoidal shape. We shall denote by T_n the sum given on the right side of (3):

\[ T_n = \frac{h}{2}\left[ f(x_0) + 2f(x_1) + \cdots + 2f(x_{n-1}) + f(x_n) \right]. \]

Note that the result in (3) is actually the average of the left-endpoint and right-endpoint approximations given in (1) and (2).
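As a concrete illustration of the scheme in (3), here is a minimal Python sketch of the composite Trapezoidal Rule; the function name trapezoid and its signature are our own choices rather than anything fixed by the text.

def trapezoid(f, a, b, n):
    """Composite Trapezoidal Rule T_n for f on [a, b] with n equal subintervals."""
    h = (b - a) / n
    # Endpoint values get weight 1, interior node values get weight 2, as in (3).
    total = f(a) + f(b)
    for i in range(1, n):
        total += 2 * f(a + i * h)
    return h / 2 * total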

Example 1 Let us consider how to approximate the value of a definite integral over [-1, 1] by applying the Trapezoidal Rule with n = 3. Here we have a = -1 and b = 1, so that h = (b - a)/n = 2/3, with x_i = -1 + 2i/3 for i = 0, 1, 2, 3. When we apply the Trapezoidal Rule, we obtain the value of T_3 to nine decimal places. Note that the actual value of this integral is 4/3 = 1.333333333.

Figure 3: Approximating the area between the curve and the x-axis on [-1, 1] by using the Trapezoidal Rule with n = 3.
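As an illustration of the computation, assume for the moment that the integrand in Example 1 is f(x) = 1 - x^2, a choice that is consistent with the interval [-1, 1] and the stated exact value 4/3 but is our assumption, not part of the example. With that assumption, the trapezoid helper sketched above reproduces the n = 3 calculation:

# Assumed integrand: f(x) = 1 - x**2, whose exact integral over [-1, 1] is 4/3.
approx = trapezoid(lambda x: 1 - x**2, -1.0, 1.0, 3)
print(approx)   # 1.185185..., an underestimate of 4/3 = 1.333333333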

Example 2 Let us apply the Trapezoidal Rule with n = 6 to approximate the value of a definite integral of f(x) = 1/x. In this case we have h = (b - a)/6 with x_i = a + ih for i = 0, 1, \ldots, 6. Applying the Trapezoidal Rule then provides the value of T_6 to nine decimal places, which may be compared with the actual value of this integral.

Example 3 Let us apply the Trapezoidal Rule with n = 9 to approximate the value of a third definite integral. In this case we have h = (b - a)/9 with x_i = a + ih for i = 0, 1, \ldots, 9. Rounding the values f(x_i) to nine decimal places and applying the Trapezoidal Rule then provides the approximation T_9.

Example 4 We shall now use the Trapezoidal Rule to approximate a number that arises as the value of a definite integral, by applying the rule to that integral. In this example we illustrate how well the method works by computing T_n for a sequence of increasing values of n. The following table provides the values of T_n along with the error of the approximation for these values of n.
Example 5 Let us now apply the Trapezoidal Rule to a second definite integral of this kind. Once again, we shall build a table of values of T_n for the same sequence of values of n.

For each of the previous two examples we computed the values of T_n for several values of n in order to illustrate the rate at which the Trapezoidal Rule converges to the exact value of the respective definite integral. However, when applying an approximation scheme such as the Trapezoidal Rule, producing successively better approximations by computing T_n for larger and larger values of n is not very efficient if T_n is computed for consecutive n, since this requires repeating the entire computation for each n. Instead, having already computed T_n for some n, we wish to find the next larger N > n such that the amount of additional work required to compute T_N is minimized. To reuse the values of f(x) that we found in computing T_n, N must be a multiple of n, and thus we take N = 2n to minimize the necessary additional work. We can compute T_{2n} from T_n according to the scheme

\[ T_{2n} = \tfrac{1}{2}\, T_n + h_{2n} \sum_{i=1}^{n} f\!\left( a + (2i-1)\,h_{2n} \right), \]

where

\[ h_{2n} = \frac{b-a}{2n}, \]

and thus instead of having to evaluate f(x) at each of the 2n + 1 points x_i = a + i\,h_{2n}, i = 0, 1, \ldots, 2n, given by (5), we need only evaluate f(x) at the additional n points a + (2i-1)\,h_{2n}, i = 1, 2, \ldots, n, namely the midpoints of the original subintervals.
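The doubling scheme is straightforward to implement by reusing a previously computed value of T_n; the helper below (our own sketch, building on the trapezoid signature used earlier) evaluates f only at the n new midpoints.

def trapezoid_refine(f, a, b, n, T_n):
    """Given T_n for f on [a, b], return T_{2n} using only n new evaluations of f."""
    h2 = (b - a) / (2 * n)
    # The new nodes are the midpoints of the original n subintervals.
    new_sum = sum(f(a + (2 * i - 1) * h2) for i in range(1, n + 1))
    return 0.5 * T_n + h2 * new_sum

# Starting from T_1 = trapezoid(f, a, b, 1), repeated calls give T_2, T_4, T_8, ...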

The question now arises as to how accurately the Trapezoidal Rule approximates the
definite integral of a function f(x) on an interval [a,b]. There is a possibility that the
approximation will be exact, meaning that the amount of error, which is the difference
between the exact value and the approximation, is 0. However, this is not true in general,
and, in fact, for some functions this is impossible on particular intervals, an example of
which will be given later on. We study the accuracy of an approximation by considering
the error that it produces. Note that if we cannot find the exact value of a definite integral
with a finite number of algebraic operations involving elementary functions, then neither
can we find the exact value of the error of such an approximation scheme. However for
some methods of approximation we can determine bounds for the magnitude of the error.
While this does not explicitly give us the value of the error, it can still give us an estimate
of how well we are able to approximate a given definite integral using such a method. For
the Trapezoidal Rule we have the following

Theorem 1.1 Suppose that f''(x) exists on [a,b]. Then for n a positive integer,

\[ \int_a^b f(x)\,dx = T_n + E_n, \]

where T_n is the Trapezoidal Rule sum in (3) with

\[ h = \frac{b-a}{n}, \]

and the error is given by

\[ E_n = -\frac{(b-a)\,h^2}{12}\, f''(c) \]

for some point c in [a,b].


Since the number c is not specified in this theorem, we are unable to use it to determine the exact value of E_n for functions f(x) in general. However, one of the implications here is that the magnitude of the error has the bounds

\[ \frac{(b-a)\,h^2}{12}\, \min_{a \le x \le b} |f''(x)| \;\le\; |E_n| \;\le\; \frac{(b-a)\,h^2}{12}\, \max_{a \le x \le b} |f''(x)|. \]

Thus if f''(x) is never 0 on [a,b], then the error must be non-zero.

Example 6 In Example 2 we used the Trapezoidal Rule to find T_6 as an approximation of the value of the integral of f(x) = 1/x considered there. For this function we have f''(x) = 2/x^3, so the bounds above involve the extreme values of 2/x^3 on the interval of integration. Since f''(x) = 2/x^3 is strictly decreasing for all x > 0, its largest and smallest values on the interval occur at the left and right endpoints, respectively, and these give explicit upper and lower bounds for |E_6|.

Example 7 In Example 3 we used the Trapezoidal Rule to find T_9 as an approximation of the value of the definite integral considered there. For that function we compute the second derivative f''(x) and note that its derivative is positive on the interval of integration. Therefore f''(x) is increasing on [1, e], so that its extreme values on [1, e] occur at the endpoints, which again yields upper and lower bounds for the magnitude of the error.

Example 8 In Example 4 we approximated a value by means of the Trapezoidal Rule applied to a definite integral. Here the second derivative f''(x) of the integrand does not exist at x = 1; in fact, it becomes unbounded as x approaches 1. Thus we cannot use Theorem 1.1 to bound the error E_n. However, this does not imply that we are unable to approximate the value effectively by applying the Trapezoidal Rule to this definite integral. To obtain better and better approximations, we need only increase the value of n.

Simpson's Rule
Simpson's rule is a Newton-Cotes formula for approximating the integral of a function using quadratic polynomials (i.e., parabolic arcs instead of the straight line segments used in the trapezoidal rule).

Simpson's rule can be derived by integrating the quadratic (degree-two) Lagrange interpolating polynomial fit to the function at three equally spaced points. In particular, let the function f be tabulated at points x_0, x_1, and x_2, equally spaced by distance h, and denote f_i = f(x_i). Then Simpson's rule states that

\[ \int_{x_0}^{x_2} f(x)\,dx \approx \frac{h}{3}\left( f_0 + 4 f_1 + f_2 \right). \]

Although it uses quadratic polynomials to approximate the integrand, Simpson's rule actually gives exact results when approximating integrals of polynomials up to cubic degree.
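The exactness claim is easy to verify numerically. The short sketch below implements the basic three-point rule and checks it on an arbitrarily chosen cubic (the cubic is our example, not one from the text).

def simpson_3pt(f, x0, x2):
    """Basic Simpson's rule on [x0, x2], using the midpoint as x1."""
    h = (x2 - x0) / 2
    x1 = x0 + h
    return h / 3 * (f(x0) + 4 * f(x1) + f(x2))

# Example cubic f(x) = x**3 - 2*x on [0, 2]: the exact integral is 2**4/4 - 2**2 = 0.
print(simpson_3pt(lambda x: x**3 - 2 * x, 0.0, 2.0))   # prints 0.0, i.e. exact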


For example, consider an integrand (the black curve in the accompanying figure) on an interval over which the exact value of the integral is 1. Simpson's rule, which corresponds to the area under the blue curve obtained from the quadratic interpolating polynomial, gives a result much closer to the exact answer than the trapezoidal rule, which corresponds to the area under the red curve.

In exact form,

\[ \int_{x_0}^{x_2} f(x)\,dx = \frac{h}{3}\left( f_0 + 4 f_1 + f_2 \right) + R, \]

where the remainder term can be written as

\[ R = -\frac{h^5}{90}\, f^{(4)}(\xi), \]

with \xi being some value of x in the interval [x_0, x_2].

An extended (composite) version of the rule can be written for f tabulated at points x_1, x_2, \ldots, x_n, with n odd so that the number of subintervals is even, as

\[ \int_{x_1}^{x_n} f(x)\,dx = \frac{h}{3}\left[ f_1 + 4\left( f_2 + f_4 + \cdots + f_{n-1} \right) + 2\left( f_3 + f_5 + \cdots + f_{n-2} \right) + f_n \right] + R_n, \]

where the remainder term is

\[ R_n = -\frac{(n-1)\,h^5}{180}\, f^{(4)}(\xi) \]

for some \xi in [x_1, x_n].
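A sketch of the extended (composite) rule in Python follows; here the argument n is the number of subintervals, which must be even and therefore corresponds to an odd number of tabulation points. The function name and signature are our own choices.

def simpson_composite(f, a, b, n):
    """Composite Simpson's rule for f on [a, b] with an even number n of subintervals."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # Odd-indexed interior nodes get weight 4, even-indexed interior nodes get weight 2.
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h / 3 * total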

Romberg's method

In numerical analysis, Romberg's method (Romberg 1955) generates a triangular array consisting of numerical estimates of the definite integral

\[ \int_a^b f(x)\,dx \]

by using Richardson extrapolation (Richardson 1910) repeatedly on the trapezium rule.


Romberg's method evaluates the integrand at equally-spaced points. The integrand must
have continuous derivatives, though fairly good results may be obtained if only a few
derivatives exist. If it is possible to evaluate the integrand at unequally-spaced points,
then other methods such as Gaussian quadrature and Clenshaw-Curtis quadrature are
generally more accurate.

Method
The method can be defined inductively in this way:

\[ R(0,0) = \tfrac{1}{2}(b-a)\bigl( f(a) + f(b) \bigr), \]

\[ R(n,0) = \tfrac{1}{2}\,R(n-1,0) + h_n \sum_{k=1}^{2^{\,n-1}} f\bigl( a + (2k-1)\,h_n \bigr), \qquad n \ge 1, \]

\[ R(n,m) = R(n,m-1) + \frac{R(n,m-1) - R(n-1,m-1)}{4^m - 1}, \qquad n \ge m \ge 1, \]

or, equivalently,

\[ R(n,m) = \frac{4^m\,R(n,m-1) - R(n-1,m-1)}{4^m - 1}, \]

where

\[ h_n = \frac{b-a}{2^n}. \]

In big O notation, the error for R(n,m) is

\[ O\!\left( h_n^{\,2m+2} \right). \]

The first extrapolation, R(n,1), is equivalent to Simpson's rule with 2^n + 1 points.
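The recurrences above translate directly into a short routine that builds the triangular array row by row; the function name romberg and the list-of-lists layout are our own choices.

import math

def romberg(f, a, b, max_n):
    """Return the triangular Romberg array R[n][m] for 0 <= m <= n <= max_n."""
    R = [[0.5 * (b - a) * (f(a) + f(b))]]              # R(0, 0)
    for n in range(1, max_n + 1):
        h = (b - a) / 2**n
        # R(n, 0): refined trapezium rule, reusing R(n-1, 0).
        trap = 0.5 * R[n - 1][0] + h * sum(
            f(a + (2 * k - 1) * h) for k in range(1, 2**(n - 1) + 1)
        )
        row = [trap]
        # Richardson extrapolation across the row.
        for m in range(1, n + 1):
            row.append(row[m - 1] + (row[m - 1] - R[n - 1][m - 1]) / (4**m - 1))
        R.append(row)
    return R

# Example: the integral of sin(x) over [0, pi] is exactly 2.
print(romberg(math.sin, 0.0, math.pi, 4)[-1][-1])      # agrees with 2 to many decimal places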

When function evaluations are expensive, it may be preferable to replace the polynomial
interpolation of Richardson with the rational interpolation proposed by Bulirsch & Stoer
(1967).

Gauss-Legendre Quadrature Method


We seek an approximation of the form

\[ I = \int_a^b f(x)\,dx \;\approx\; \sum_{i=0}^{n} w_i\, f(x_i). \]

There are 2n + 2 parameters in total, the n + 1 weights w_i and the n + 1 nodes x_i, which is the same as the number of coefficients that define a polynomial of degree 2n + 1. We therefore choose the x_i's such that the sum \sum_{i=0}^{n} w_i f(x_i) gives the integral \int_a^b f(x)\,dx exactly when f(x) is a polynomial of degree 2n + 1.

Replace f(x) by an interpolating polynomial of degree n:

\[ I = \int_a^b f(x)\,dx = \int_a^b P_n(x)\,dx + \int_a^b R(x)\,dx, \]

where the last term is the error term. We will try to make

\[ \int_a^b R(x)\,dx = 0 \]

when f(x) is a polynomial of degree 2n + 1.

Write P_n(x) in Lagrange form:

\[ P_n(x) = \sum_{i=0}^{n} L_i(x)\, f(x_i), \]

\[ P_n(x) = \frac{(x-x_1)(x-x_2)\cdots(x-x_n)}{(x_0-x_1)(x_0-x_2)\cdots(x_0-x_n)}\, f(x_0) + \frac{(x-x_0)(x-x_2)\cdots(x-x_n)}{(x_1-x_0)(x_1-x_2)\cdots(x_1-x_n)}\, f(x_1) + \cdots \]

Substitute:

\[ I = \int_a^b f(x)\,dx = \int_a^b \sum_{i=0}^{n} L_i(x)\, f(x_i)\,dx + \int_a^b R(x)\,dx = \sum_{i=0}^{n} \left[ \int_a^b L_i(x)\,dx \right] f(x_i) + \int_a^b R(x)\,dx. \]

The error term in Lagrange form:

\[ R(x) = \prod_{i=0}^{n} (x - x_i)\; \frac{f^{(n+1)}(\xi)}{(n+1)!}, \qquad a \le \xi \le b. \]

Change coordinates from x in [a, b] to z in [-1, 1], where

\[ z = \frac{2x - (a+b)}{b-a}, \qquad F(z) = f(x) = f\!\left( \frac{(b-a)z + b + a}{2} \right). \]

Then

\[ f(x) = F(z) = \sum_{i=0}^{n} L_i(z)\, F(z_i) + \prod_{i=0}^{n} (z - z_i)\; \frac{F^{(n+1)}(\xi)}{(n+1)!}, \qquad -1 \le \xi \le 1, \]

where

\[ L_i(z) = \prod_{\substack{j=0 \\ j \ne i}}^{n} \frac{z - z_j}{z_i - z_j}. \]

b 1 n 1  1
I f (x) dx  F (z) dz    L i (z) dz  F (z i )  R (z) dz
a -1 i 0  - 1  -1

n
F (n 1) ()
R (z) (z - z i ) , - 1    1
i 0 (n  1) !
If f(x), or equivalently F(z), is a polynomial of degree 2n + 1, then F^{(n+1)}(z) is a polynomial of degree n at most.

Let

\[ g_n(z) = \frac{F^{(n+1)}(z)}{(n+1)!}. \]

Then

\[ \int_{-1}^{1} F(z)\,dz = \sum_{i=0}^{n} \left[ \int_{-1}^{1} L_i(z)\,dz \right] F(z_i) + \int_{-1}^{1} \prod_{i=0}^{n} (z - z_i)\, g_n(z)\,dz. \]

Question: is

\[ \int_{-1}^{1} \prod_{i=0}^{n} (z - z_i)\, g_n(z)\,dz = 0 \]

when F(z) is a polynomial of degree 2n + 1?

The answer is to choose the z_i's such that this is true, using a complete set of orthogonal functions.

Choose the Legendre polynomials as the orthogonal set of functions, and expand both factors in terms of Legendre polynomials:

\[ \prod_{i=0}^{n} (z - z_i) = b_0 P_0(z) + b_1 P_1(z) + \cdots + b_{n+1} P_{n+1}(z) = \sum_{i=0}^{n+1} b_i P_i(z), \]

\[ g_n(z) = c_0 P_0(z) + c_1 P_1(z) + \cdots + c_n P_n(z) = \sum_{i=0}^{n} c_i P_i(z). \]

Then

\[ \int_{-1}^{1} R(z)\,dz = \int_{-1}^{1} \prod_{i=0}^{n} (z - z_i)\, g_n(z)\,dz = \sum_{i=0}^{n} \sum_{j=0}^{n} b_i c_j \int_{-1}^{1} P_i(z)\, P_j(z)\,dz \;+\; b_{n+1} \sum_{i=0}^{n} c_i \int_{-1}^{1} P_i(z)\, P_{n+1}(z)\,dz. \]

Since

\[ \int_{-1}^{1} P_i(z)\, P_j(z)\,dz = 0 \qquad \text{for } i \ne j, \]

this reduces to

\[ \int_{-1}^{1} R(z)\,dz = \sum_{i=0}^{n} b_i c_i \int_{-1}^{1} P_i(z)^2\,dz. \]

Since the c_i are in general non-zero and \int_{-1}^{1} P_i(z)^2\,dz \ne 0, the only way to obtain

\[ \int_{-1}^{1} R(z)\,dz = 0 \]

is to choose b_i = 0 for i = 0, 1, \ldots, n.

Then

\[ \prod_{i=0}^{n} (z - z_i) = \sum_{i=0}^{n+1} b_i P_i(z) = b_{n+1}\, P_{n+1}(z). \]

So choose the z_i's such that P_{n+1}(z_i) = 0; that is, the z_i's are the roots of the Legendre polynomial P_{n+1}(z).

Legendre Polynomials: P 0 1

P 1 1

1
P 2  (3 z 2 - 1)
2

1
P 3  (5 z 3 - 3 z)
2
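In practice the roots z_i of P_{n+1}(z) and the corresponding weights are taken from a library rather than computed by hand. The sketch below uses NumPy's numpy.polynomial.legendre.leggauss, which returns the nodes and weights on [-1, 1] for a given number of points (npts here plays the role of n + 1 above), and then maps them back to [a, b] with the change of variables given earlier; the wrapper function and the test integrand are our own illustrative choices.

import numpy as np

def gauss_legendre(f, a, b, npts):
    """Approximate the integral of f over [a, b] with npts-point Gauss-Legendre quadrature."""
    z, w = np.polynomial.legendre.leggauss(npts)   # nodes and weights on [-1, 1]
    x = 0.5 * (b - a) * z + 0.5 * (b + a)          # map the nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))

# With npts points the rule is exact for polynomials of degree 2*npts - 1,
# so two points already integrate any cubic exactly.
print(gauss_legendre(lambda x: x**3 + x, 0.0, 2.0, 2))   # exact value: 6.0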

Euler's Method
We have seen how to use a direction field to obtain qualitative information about the solutions to a differential equation. This simple kind of reasoning led to predictions for the eventual behaviour of solutions to the logistic equation.

Sometimes, however, we want more detailed information. For instance, we might want to
know how long it will take before the solution is near the limiting value. In this case, we
can use the linear approximation to numerically approximate solutions to differential
equations. We will demonstrate this approach through an example.
A Simple Initial Value Problem

Let's start by looking at an initial value problem whose solution is known:

\[ y' = y, \qquad y(0) = 1. \]

We know that the solution is y = e^x. This means that after we find our approximate solution, we will be able to determine how good an approximation it really is.

Let's suppose that we are interested in the value of the solution at x = 1. We know the value at x = 0, since that is a part of the initial value problem---namely, y(0) = 1. Notice that the differential equation also tells us the derivative of the solution at x = 0, since y'(0) = y(0) = 1.

If we now form the linear approximation at x = 0, we find that y \approx 1 + x. Then our approximation yields y(1) \approx 2.

This approximation is not too good, but it was easy to obtain. Graphically, the picture is like this:

The problem with the approximation is that the derivative of the solution is changing across the interval, but the approximation assumes that it is constantly 1. We can try to fix this up by dividing the interval into two pieces: first, we will use the linear approximation based at x = 0 to approximate the value at x = 1/2; then we will use a linear approximation at x = 1/2 to obtain an approximate value at x = 1.

We have already obtained the linear approximation based at x = 0, namely y \approx 1 + x. This produces the approximate value y(1/2) \approx 3/2, which tells us that the solution curve approximately passes through (1/2, 3/2). That means that the derivative there is approximately 3/2 as well. We will then form the linear approximation at the point (1/2, 3/2): it produces

\[ y \approx \tfrac{3}{2} + \tfrac{3}{2}\left( x - \tfrac{1}{2} \right), \]

which yields the approximation y(1) \approx 9/4 = 2.25. This is, in fact, a better approximation to the value y(1) = e \approx 2.718. Graphically, what we have done is illustrated in the diagram.

Here you can see why we have a better approximation: the derivative of the solution changes as we move across the interval [0, 1]. In the second approximation, we take this into account by stopping at x = 1/2, recomputing the derivative, and then continuing on.

Now you can probably imagine that we will get better approximations if we take shorter steps and correct the slope at every step. To do this, imagine walking from 0 to 1 by taking n steps, each of width h = 1/n. We will call the points we obtain x_0, x_1, \ldots, x_n and the corresponding approximate values y_0, y_1, \ldots, y_n.

Notice that y_0 = 1, since this is where the initial value problem tells us to begin. To get from one step to the next, we are assuming that the solution approximately passes through (x_n, y_n). At that point, the derivative, which is equal to the y coordinate by the differential equation, is y_n. That means that the linear approximation at that point is

\[ y \approx y_n + y_n\,(x - x_n). \]

This means that at x_{n+1} = x_n + h, we have

\[ y_{n+1} = y_n + h\, y_n. \]
Notice that as the number of steps gets larger, the approximate solution becomes very good.
Euler's Method

Now we will work with a general initial value problem

\[ y' = f(x, y), \qquad y(x_0) = y_0. \]

We will again form an approximate solution by taking lots of little steps. We will call the distance between the steps h and the various points x_0, x_1, x_2, \ldots, with corresponding approximate values y_0, y_1, y_2, \ldots. To get from one step to the next, we will form the linear approximation at (x_n, y_n). The derivative at this point is given by the differential equation: y' = f(x_n, y_n). The linear approximation is then

\[ y \approx y_n + f(x_n, y_n)\,(x - x_n), \]

so that

\[ y_{n+1} = y_n + h\, f(x_n, y_n). \]

This technique is called Euler's Method.
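A minimal Python sketch of Euler's Method as just described follows; the function name and argument names are our own choices.

def euler(f, x0, y0, h, nsteps):
    """Euler's Method: repeatedly apply y_{n+1} = y_n + h * f(x_n, y_n)."""
    xs, ys = [x0], [y0]
    for _ in range(nsteps):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

# The introductory example y' = y, y(0) = 1 on [0, 1] with n = 10 steps:
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(ys[-1])   # about 2.5937, approaching e = 2.71828... as the number of steps grows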

The logistic equation

Now we will consider an initial value problem whose differential equation has the basic form of the logistic equation. We have studied this equation qualitatively, but we do not explicitly know its solutions. As an example, we will approximate the solution on an interval by taking steps of width h.

Applying Euler's Method with f equal to the right-hand side of the logistic equation, we can generate an approximate solution from the same update rule,

\[ y_{n+1} = y_n + h\, f(x_n, y_n). \]
Again, as you take more and more steps, the approximate solution does not vary very much when you increase the number of steps further. You can then feel confident that your solution is a good approximation.
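As a concrete illustration, here is a hedged sketch that reuses the euler helper above with assumed logistic parameters, namely y' = y(1 - y) and y(0) = 0.1 on the interval [0, 10]; these numbers are illustrative assumptions only, not ones taken from the text.

# Assumed logistic problem (illustrative parameters): y' = y * (1 - y), y(0) = 0.1.
for nsteps in (10, 100, 1000):
    h = 10.0 / nsteps
    _, ys = euler(lambda x, y: y * (1 - y), 0.0, 0.1, h, nsteps)
    print(nsteps, ys[-1])   # the values settle near the limiting value 1 as nsteps grows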

The Runge-Kutta Method

Figure 3.7: The Euler method. The derivative at the starting point of each time interval (denoted by the arrows) is extrapolated to find the next function value. The dashed curve is the numerical approximation to the exact solution x(t), which shows the position of a pore-interface as a function of time. In general the numerical approximation deviates from the exact solution.

The new positions of the pore-interfaces are calculated by using a second order Runge-Kutta scheme [26]. This method is an improvement of Euler's method, the simplest and least accurate method for updating the positions. Assume that the position x(t) of a given pore-interface behaves as in figure 3.7. Let the time axis be divided into intervals of equal length dt, and let x_n denote the numerical approximation to x(t_n). The formula for the Euler method is [26]

\[ x_{n+1} = x_n + v(x_n, t_n)\, dt, \]

where v(x_n, t_n) is the flow velocity in the tube at time t_n corresponding to the position x_n. This velocity is actually the derivative of the numerical solution at the starting point of each time interval, and the derivative is simply extrapolated to find the next function value, as shown in figure 3.7. The method has only first-order accuracy, i.e. an error term of order O(dt^2) per step is to be added to 3.9. Moreover, the Euler method is not very stable.

The second order Runge-Kutta is an extension of the first order Euler scheme. The improvement is obtained by using the initial derivative at each step to find a point halfway across the interval, and then using the midpoint derivative across the full width of the interval. The formula for the second order Runge-Kutta becomes [26]

\[ x_{n+1/2} = x_n + \tfrac{1}{2}\, v(x_n, t_n)\, dt, \qquad x_{n+1} = x_n + v\!\left( x_{n+1/2},\, t_n + \tfrac{1}{2} dt \right) dt, \]

where v(x_{n+1/2}, t_n + \tfrac{1}{2} dt) is the midpoint velocity. The algorithm is illustrated in figure 3.8.

Figure 3.8: The second order Runge-Kutta method requires two evaluations of the derivative per time step. In the figure, the filled dots represent final function values, together with their derivatives, while open dots represent function values that are discarded once their derivatives have been calculated and used. The examples described in figure 3.7 and in the figure above clearly show that the second order Runge-Kutta method is more accurate than Euler's method.
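A sketch of the second order (midpoint) Runge-Kutta update described above, written for a generic velocity field v(x, t); the function names and the comparison problem are our own choices, not taken from reference [26].

import math

def euler_step(v, x, t, dt):
    """One Euler step for dx/dt = v(x, t)."""
    return x + dt * v(x, t)

def rk2_step(v, x, t, dt):
    """One second-order Runge-Kutta (midpoint) step for dx/dt = v(x, t)."""
    x_mid = x + 0.5 * dt * v(x, t)             # Euler half-step to the midpoint
    return x + dt * v(x_mid, t + 0.5 * dt)     # full step using the midpoint velocity

# Quick comparison on dx/dt = x, x(0) = 1, whose exact solution at t = 1 is e.
x_e, x_rk, t, dt = 1.0, 1.0, 0.0, 0.1
for _ in range(10):
    x_e = euler_step(lambda x, t: x, x_e, t, dt)
    x_rk = rk2_step(lambda x, t: x, x_rk, t, dt)
    t += dt
print(x_e, x_rk, math.exp(1.0))   # the RK2 value is much closer to e than the Euler value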

A problem arises when using the second order Runge-Kutta method with a time step that depends on the velocity in the tubes. The velocities defining the time step in equation 3.8 correspond to the derivative of the curve at the starting point of each time interval, that is, to the velocity at the beginning of the step (the subscript ij is omitted on the right hand side of the equality). If the next position is found by using the Euler method, the displacement length for all pore-interfaces will always be bounded by the maximum step length, as expressed in condition 3.10.

In the second order Runge-Kutta method the midpoint velocity is found by using the time step defined in equation 3.8. Figure 3.8 shows that the midpoint velocity is in general not equal to the velocity at the starting point. Now the next position is found by using the midpoint velocity, and condition 3.10 may no longer be valid. If the function x(t) is smooth, this effect will more or less vanish, since the midpoint velocity then differs little from the velocity at the starting point. Moreover, there is no problem as long as the midpoint velocity does not exceed the velocity at the starting point. The problem arises when the midpoint velocity is much larger than the starting velocity, so that the final displacement becomes much larger than the maximum step length. In the worst case the numerical solution will diverge. To ensure that 3.10 is fulfilled, the time step is redefined by inserting the midpoint velocities in equation 3.8. A new midpoint velocity corresponding to the redefined time step is calculated and the positions are updated once more by using this midpoint velocity. The procedure is repeated until the displacements become close enough, typically to within a small tolerance of the maximum step length.

An even more accurate scheme is the fourth-order Runge-Kutta method, which requires four evaluations of the derivative per time step. It would be superior to the second-order Runge-Kutta method only if it allowed time steps at least twice as large. For our problem this is not the case, and the gain in accuracy is lost to the increase in computation time. Hence, we have concluded that a second order Runge-Kutta scheme with an adaptive step size in the middle of the tube is sufficient to ensure the stability of the numerical solution.
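For completeness, here is a sketch of the classical fourth-order Runge-Kutta step mentioned above, showing the four derivative evaluations per time step; the classical coefficients used here are the standard ones and are not taken from reference [26].

def rk4_step(v, x, t, dt):
    """One classical fourth-order Runge-Kutta step for dx/dt = v(x, t)."""
    k1 = v(x, t)
    k2 = v(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = v(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = v(x + dt * k3, t + dt)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)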
