Lecture Aid 2012
\int_a^b f'(x)\,dx, for which the formula is

\int u\,dv = uv - \int v\,du.    (2.2)

Let u = f'(x), dv = dx and v = x, and substitute into equation (2.2) to obtain

\int_a^b f'(x)\,dx = x f'(x)\Big|_a^b - \int_a^b x f''(x)\,dx
                   = b f'(b) - a f'(a) - \int_a^b x f''(x)\,dx.

We can now reapply the fundamental theorem of calculus to the term f'(b) in the form

f'(b) = f'(a) + \int_a^b f''(x)\,dx

to obtain

\int_a^b f'(x)\,dx = b\Big[ f'(a) + \int_a^b f''(x)\,dx \Big] - a f'(a) - \int_a^b x f''(x)\,dx
                   = (b - a) f'(a) + \int_a^b (b - x) f''(x)\,dx,

and substitution of this back into equation (2.1) yields

f(b) = f(a) + (b - a) f'(a) + \int_a^b (b - x) f''(x)\,dx.

Repetition of this process yields Taylor's theorem as it is shown in the textbook.
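To make the repetition concrete, here is one more pass of the argument (a sketch: the notes repeat the v = x substitution, but the equivalent antiderivative choice v = -(b - x)^2/2 shortens the algebra, since the boundary term at x = b then vanishes):

\int_a^b (b - x) f''(x)\,dx = \Big[ -\frac{(b - x)^2}{2} f''(x) \Big]_a^b + \int_a^b \frac{(b - x)^2}{2} f'''(x)\,dx
                            = \frac{(b - a)^2}{2} f''(a) + \int_a^b \frac{(b - x)^2}{2} f'''(x)\,dx,

so that

f(b) = f(a) + (b - a) f'(a) + \frac{(b - a)^2}{2!} f''(a) + \int_a^b \frac{(b - x)^2}{2} f'''(x)\,dx.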
Taylor series examples
Students should be somewhat familiar with the methodology of obtaining Taylor series expansions of functions. We list some examples which illustrate this methodology.
Example 1. Find the Taylor series expansion of ln(1 - x) centered around x_0 = 0.

Solution: If we let f(x) = ln(1 - x), then we obtain the following derivatives and their values at x_0 = 0:
f(x) = ln(1 - x)                      f(0) = 0
f'(x) = -(1 - x)^{-1}                 f'(0) = -1
f''(x) = -(1 - x)^{-2}                f''(0) = -1
f'''(x) = -2(1 - x)^{-3}              f'''(0) = -2
f^{(4)}(x) = -2 \cdot 3 (1 - x)^{-4}  f^{(4)}(0) = -2 \cdot 3
  ...
f^{(n)}(x) = -(n-1)!(1 - x)^{-n}      f^{(n)}(0) = -(n-1)!
We thus obtain

f(x) = \frac{f(0)}{0!}(x - 0)^0 + \frac{f'(0)}{1!}(x - 0)^1 + \frac{f''(0)}{2!}(x - 0)^2 + \frac{f'''(0)}{3!}(x - 0)^3 + \cdots
     = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}(x - 0)^n
     = \sum_{n=1}^{\infty} \frac{-(n-1)!}{n!} x^n        (the n = 0 term vanishes since f(0) = 0)
     = -\sum_{n=1}^{\infty} \frac{x^n}{n}.
For the representation to be valid (or for the series to converge), we require \lim_{n \to \infty} R_n = 0 and thus we investigate the generalised terms a_n of the infinite sum, where

a_n = -\frac{x^n}{n}.

It follows that

\frac{a_n}{a_{n-1}} = \frac{x^n}{n} \cdot \frac{n-1}{x^{n-1}} = \Big( \frac{n-1}{n} \Big) x,

for which

\Big| \frac{a_n}{a_{n-1}} \Big| \to |x|   as   n \to \infty.
Thus, we require |x| < 1 for the infinite series to converge, and only if we have this convergence are we actually allowed to represent the function by the infinite series.
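As a quick numerical illustration (not part of the original notes; the value x = 0.5 is an arbitrary choice inside the interval of convergence), the partial sums of -\sum x^n/n can be compared against ln(1 - x) directly:

#include <cmath>
#include <cstdio>

int main() {
    const double x = 0.5;      // arbitrary point with |x| < 1
    double term = 1.0, sum = 0.0;
    for (int n = 1; n <= 20; ++n) {
        term *= x;             // term now holds x^n
        sum -= term / n;       // accumulate -x^n/n
        if (n % 5 == 0)
            std::printf("N = %2d: partial sum = %.10f\n", n, sum);
    }
    std::printf("ln(1 - x)          = %.10f\n", std::log(1.0 - x));
    return 0;
}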
Example 2. Given the function

g(x) = \sin x,

complete the following instructions: (a) find the Taylor series expansion about x_0 = 0, and (b) find the radius and interval of convergence.

Solution: (a) Given g(x) and x_0 = 0, we obtain the following:
g(x) = \sin x          g(0) = 0
g'(x) = \cos x         g'(0) = 1
g''(x) = -\sin x       g''(0) = 0
g'''(x) = -\cos x      g'''(0) = -1
g^{(4)}(x) = \sin x    g^{(4)}(0) = 0
  ...
We see that

g^{(n)}(x) = \pm\sin x for n even, \pm\cos x for n odd,

and

g^{(n)}(0) = 0 for n even, \pm 1 for n odd.

Using this information and the generic formula

g(x) = \sum_{n=0}^{\infty} \frac{g^{(n)}(0)}{n!}(x - x_0)^n

we obtain

g(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!} g^{(n)}(0) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n+1}}{(2n+1)!}.
(b) It follows that

a_n = (-1)^n \frac{x^{2n+1}}{(2n+1)!}

and therefore

\frac{a_n}{a_{n-1}} = (-1)^n \frac{x^{2n+1}}{(2n+1)!} \Big/ (-1)^{n-1} \frac{x^{2(n-1)+1}}{(2(n-1)+1)!}
                    = -\frac{x^{2n+1}}{x^{2n-1}} \cdot \frac{(2n-1)!}{(2n+1)!}
                    = -x^2 \cdot \frac{2 \cdot 3 \cdots (2n-1)}{2 \cdot 3 \cdots (2n-1)(2n)(2n+1)}
                    = -\frac{x^2}{2n(2n+1)}.

Of course, convergence requires that

\lim_{n \to \infty} \Big| \frac{a_n}{a_{n-1}} \Big| = \lim_{n \to \infty} \frac{x^2}{2n(2n+1)} = 0,

which holds for all x. It follows that the radius of convergence is infinite, i.e. R_c = \infty, and the interval of convergence is the entire real line, i.e. I_c = \mathbb{R}.
Example 3. Find the radius and interval of convergence of the series expansion of ln(x) about x_0 = 1.

Solution: It is up to the student to verify that the series expansion of ln(x) is given by

ln(x) = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{(x-1)^n}{n}.
For convergence, we use the limit test and hence we let

a_n = (-1)^{n+1} \frac{(x-1)^n}{n}

so that we may calculate |a_n / a_{n-1}|, i.e.

\frac{a_n}{a_{n-1}} = (-1)^{n+1} \frac{(x-1)^n}{n} \Big/ (-1)^{(n-1)+1} \frac{(x-1)^{n-1}}{n-1}
                    = \Big( \frac{(-1)^{n+1}}{(-1)^n} \Big) \Big( \frac{(x-1)^n}{(x-1)^{n-1}} \Big) \Big( \frac{n-1}{n} \Big)
                    = -(x-1) \Big( \frac{n-1}{n} \Big).

Now of course, it follows that

\lim_{n \to \infty} \Big| \frac{a_n}{a_{n-1}} \Big| = \lim_{n \to \infty} \Big| (x-1) \Big( \frac{n-1}{n} \Big) \Big| = |x-1| \lim_{n \to \infty} \frac{n-1}{n} = |x-1| \lim_{n \to \infty} \Big( 1 - \frac{1}{n} \Big) = |x-1|.
We require that |x - 1| < 1 for convergence, or that 0 < x < 2. Hence, the radius of convergence is R_c = 1 and the interval of convergence is I_c = (0, 2).

Note: What about convergence at the endpoints of the interval?
Example 4. The following problem is number 11 on page 15 of Numerical Analysis by R.L. Burden & J.D. Faires (9th edition).

11. Let f(x) = 2x\cos(2x) - (x - 2)^2 and x_0 = 0.
(a) Find the third Taylor polynomial P_3(x), and use it to approximate f(0.4).
(b) Use the error formula in Taylor's Theorem to find an upper bound for the error |f(0.4) - P_3(0.4)|. Compute the actual error.
(c) Find the fourth Taylor polynomial P_4(x), and use it to approximate f(0.4).
(d) Use the error formula in Taylor's Theorem to find an upper bound for the error |f(0.4) - P_4(0.4)|. Compute the actual error.

Note that the error term they refer to is the Lagrange estimate of the error term as in your notes.
Solution: (a) The derivatives and their values at x_0 = 0 are as follows:

f(x) = 2x\cos(2x) - (x - 2)^2,   f(0) = -4,   f'(0) = 6,   f''(0) = -2,   f'''(0) = -24.

We thus obtain

P_3(x) = f(0) + x f'(0) + \frac{x^2}{2!} f''(0) + \frac{x^3}{3!} f'''(0) = -4 + 6x - x^2 - 4x^3

and it follows that P_3(0.4) = -2.016.
(b) The Lagrange estimate of the error is given by the formula

R_n = \frac{(x - x_0)^{n+1}}{(n+1)!} f^{(n+1)}(\xi_x),   where x_0 < \xi_x < x.

For this particular problem, we have n = 3, x_0 = 0 and x = 0.4. Thus we obtain

R_3 = \frac{x^4}{4!} f^{(4)}(\xi_x),

with 0 < \xi_x < 0.4. The reason we haven't substituted x = 0.4 into the equation for the estimate of the error shall become clear when we try to find a bound on the absolute value of R_3. Before we get there, we calculate the fourth derivative of f(x), i.e.
f^{(4)}(x) = 64\sin(2x) + 32x\cos(2x),

and it follows that

|R_3| = \frac{x^4}{4!} \big| 64\sin(2\xi_x) + 32\xi_x\cos(2\xi_x) \big| = \frac{x^4}{24} \big| 64\sin(2\xi_x) + 32\xi_x\cos(2\xi_x) \big|.

We apply the inequality |a + b| \le |a| + |b| to the last expression to obtain

|R_3| \le \frac{8x^4}{3} |\sin(2\xi_x)| + \frac{4x^4}{3} \xi_x |\cos(2\xi_x)|.

Of course |\sin(2\xi_x)| \le 1 and |\cos(2\xi_x)| \le 1, and since 0 < \xi_x < x we can then write

|R_3| \le \frac{8x^4}{3} + \frac{4x^5}{3}.

At x = 0.4 this gives |R_3| \le 8(0.4)^4/3 + 4(0.4)^5/3 \approx 0.08192, which is an upper bound on the error. However, the book from which the problem is taken states this upper bound as |R_3| \le 0.05849. Why the difference? We investigate the graph of |R_3(x)| = \frac{x^4}{24} |64\sin(2x) + 32x\cos(2x)| on the interval [0, 0.4], given by figure 1.
Figure 1: Graph of |R_3(x)| = (x^4/24)|64\sin(2x) + 32x\cos(2x)| versus the bound 8x^4/3 + 4x^5/3.
At x = 0.4 we see from the figure that the actual error term is smaller than the estimate we obtained, even though we used an algebraically sound derivation to estimate the error. The conclusion should then be that we do obtain some upper bound estimate of the error, even though it might not be the smallest estimate for an upper bound. The reason behind this is the \xi_x term in the remainder. Theoretically, we know that it lies in the interval (0, 0.4); however, we don't ever determine its actual value in this interval, and hence the estimate. If we were to determine the actual value of \xi_x in the interval, it would be the same as evaluating the error, and hence the function itself, exactly, i.e. the Taylor polynomial plus its remainder term would agree with the function everywhere on the interval.
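The comparison is easy to check numerically. A minimal sketch (not from the original program; it only re-evaluates the polynomial and the bound derived above):

#include <cmath>
#include <cstdio>

double f(double x)  { return 2.0 * x * std::cos(2.0 * x) - (x - 2.0) * (x - 2.0); }
double P3(double x) { return -4.0 + 6.0 * x - x * x - 4.0 * x * x * x; }

int main() {
    const double x = 0.4;
    double actual = std::fabs(f(x) - P3(x));
    double bound  = 8.0 * std::pow(x, 4) / 3.0 + 4.0 * std::pow(x, 5) / 3.0;
    std::printf("P3(0.4)      = %.6f\n", P3(x));   // -2.016
    std::printf("actual error = %.6f\n", actual);  // ~0.0134
    std::printf("our bound    = %.6f\n", bound);   // ~0.0819
    return 0;
}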
(c) We have seen that

f^{(4)}(x) = 64\sin(2x) + 32x\cos(2x)

and thus f^{(4)}(0) = 0. Thus, the fourth Taylor polynomial is exactly the same as the third, i.e.

P_4(x) = -4 + 6x - x^2 - 4x^3.
(d) Now we have n = 4, with x_0 = 0 and x = 0.4 as previously. Thus we obtain

|R_4| = \frac{x^5}{5!} \big| f^{(5)}(\hat{\xi}_x) \big| = \frac{x^5}{120} \big| 160\cos(2\hat{\xi}_x) - 64\hat{\xi}_x\sin(2\hat{\xi}_x) \big|,

with 0 < \hat{\xi}_x < 0.4. We've chosen to differentiate between \xi_x and \hat{\xi}_x to avoid confusion between the different approximations and their error estimates. The smallest upper bound on the error estimate here is found to be

|R_4| \le 0.00795,

which is a lot less than the actual error in the approximation (see part (b) of this question). What is the meaning of this in the context of the approximation?
There is a difference between the Taylor series expansion of a function about a point and the n-th Taylor polynomial used as an approximation to the function at a point. Figure 2 shows the graphical representation of sin x and the seventh Taylor polynomial approximation of sin x. The series expansion was done around x = 0, and from a previous example we found that the interval of convergence was the entire real line. Why the difference then? The interval of convergence is applicable only to the Taylor series expansion, i.e. the infinite sum, whereas the seventh Taylor polynomial is found after truncating this infinite series expansion at a certain n. As has been stated in the lectures, this act of truncation introduces an error, for which we can determine an upper bound using the Lagrange estimate of the error.
Figure 2: Difference between sin x and the seventh Taylor polynomial x - x^3/6 + x^5/120 - x^7/5040 on [-6, 6].
3 Nonlinear equations

Bisection method

Consider the function

f(x) = \sqrt{x} - \cos x,    (3.1)

which we'll use to illustrate the different methods covered in the chapter on nonlinear equations. A small C++ program was written to do the calculations as an example of a practical implementation of numerical method algorithms.
From the outset we have no idea where the root(s) of this function may be. Substituting x = 0 into (3.1) yields the value f(0) = -1 and substituting x = 1 into (3.1) yields f(1) = 1 - \cos(1) > 0, so we can conclude that a root lies in the interval [0, 1]. From figure 3 we can see that the root in fact lies somewhere close to x \approx 0.6.

Since the function changes sign on the interval [0, 1] and we have deduced that a root is present, we may also choose this interval as the initial interval for the bisection method. Using this method, we obtain the root to be situated at x_0 \approx 0.6417141 if we choose our tolerance to be \epsilon = 10^{-5} (with this tolerance we get accuracy up to the fifth digit after the decimal); the results are listed in table 1. These results were calculated by the following algorithm used in a C++ program:
Figure 3: f(x) = \sqrt{x} - \cos x on [0, 1.4].
do {
    x3=(x1+x2)/2;                    // midpoint of the current bracket
    if( fabs(f(x3)) < eps ) {
        done=true;                   // |f(x3)| below tolerance: accept root
    } else {
        done=false;
        if( f(x3)*f(x1) < 0 ) {      // sign change in [x1, x3]
            x2=x3;
        } else if( f(x3)*f(x2) < 0 ) { // sign change in [x3, x2]
            x1=x3;
        } else {
            cout << "Some error occurred!" << endl;  // no sign change: invalid bracket
            done=true;
        }
    }
    i++;
} while( !done && i < i_max );
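Since the notes show only the loop, the surrounding declarations are not given; the following self-contained sketch reconstructs them (names like f, eps and i_max are taken from context, and the error branch is condensed) so the listing can be compiled and run as-is:

#include <cmath>
#include <iostream>

double f(double x) { return std::sqrt(x) - std::cos(x); }

int main() {
    double x1 = 0.0, x2 = 1.0, x3 = 0.0;  // initial bracket [0, 1]
    const double eps = 1e-5;              // tolerance
    const int i_max = 100;                // iteration cap
    int i = 0;
    bool done = false;
    do {
        x3 = (x1 + x2) / 2.0;             // midpoint of the current bracket
        if (std::fabs(f(x3)) < eps) {
            done = true;                  // |f(x3)| small enough: accept x3
        } else if (f(x3) * f(x1) < 0.0) {
            x2 = x3;                      // root lies in [x1, x3]
        } else {
            x1 = x3;                      // root lies in [x3, x2]
        }
        i++;
    } while (!done && i < i_max);
    std::cout << "root ~= " << x3 << " after " << i << " iterations\n";
    return 0;
}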
Linear interpolation

Using the same function and tolerance, we now attempt to find the root using linear interpolation. For the initial values we use x_1 = 0 and x_2 = 1 (which are the x-values of the interval used in the bisection method) and the corresponding y-values.

The results are listed in table 2 and we give the corresponding C++ code used in the calculation. We see that this method obtains the root a lot quicker for the same tolerance level. Note that the update below always discards the oldest point rather than maintaining a sign change, so this is the secant form of linear interpolation rather than the bracketed (false position) form.
do {
    x3=(x1*y2-x2*y1)/(y2-y1);   // x-intercept of the secant line
    y3=f(x3);
    if( fabs(y3) < eps ) {
        done=true;              // |f(x3)| below tolerance: accept root
    } else {
        done=false;
        x1=x2; y1=y2;           // shift to the two most recent points
        x2=x3; y2=y3;
    }
    i++;
} while( !done && i < i_max );
Newton's method

To implement Newton's method we have to calculate f'(x) = \frac{1}{2\sqrt{x}} + \sin x. Again, convergence to the root is quick and we obtain an answer within a few steps that is within the required accuracy. The results are listed in table 3 and were calculated using the following code in a C++ program:
do {
    x0 = x1;
    x1 = x0 - delta(f,df,x0);          // Newton step: x1 = x0 - f(x0)/f'(x0)
    done = ( fabs(x1 - x0) <= eps );   // stop when successive iterates agree
    i++;
} while( !done && i < i_max );
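The helper delta is not shown in the notes; presumably it returns the Newton correction f(x0)/f'(x0), so that x1 = x0 - delta(f, df, x0) is one Newton step. A minimal reconstruction consistent with the call above:

// Hypothetical reconstruction of the helper used in the loop above:
// the Newton correction f(x0)/f'(x0).
double delta(double (*f)(double), double (*df)(double), double x0) {
    return f(x0) / df(x0);
}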
Figure 4 and figure 5 illustrate the difference in error between the different methods.
i   x_i   x_{i+1}   x_{i+2}   |f(x_{i+2})|
1 0.5 1 0.75 0.1343365349
2 0.5 0.75 0.625 0.02039370446
3 0.625 0.75 0.6875 0.05632125144
4 0.625 0.6875 0.65625 0.01780672762
5 0.625 0.65625 0.640625 0.001331824419
6 0.640625 0.65625 0.6484375 0.008227740279
7 0.640625 0.6484375 0.64453125 0.003445545258
8 0.640625 0.64453125 0.642578125 0.001056259211
9 0.640625 0.642578125 0.6416015625 0.000137932657
10 0.6416015625 0.642578125 0.6420898438 0.0004591257325
11 0.6416015625 0.6420898438 0.6418457031 0.0001605871555
12 0.6416015625 0.6418457031 0.6417236328 1.132490416e-05
13 0.6416015625 0.6417236328 0.6416625977 6.330446264e-05
14 0.6416625977 0.6417236328 0.6416931152 2.59899258e-05
15 0.6416931152 0.6417236328 0.641708374 7.332547463e-06
Table 1: Bisection method results
i   x_i   y_i   x_{i+1}   y_{i+1}   x_{i+2}   y_{i+2} = |f(x_{i+2})|
1 0 -1 1 0.4596977 0.6850734 0.05331895
2 1 0.4596977 0.6850734 0.05331895 0.6437534 0.002493829
3 0.6850734 0.05331895 0.6437534 0.002493829 0.6417259 1.415225e-05
4 0.6437534 0.002493829 0.6417259 1.415225e-05 0.6417144 3.717339e-09
Table 2: Linear interpolation results
i   x_i   f(x_i)   f'(x_i)   -f(x_i)/f'(x_i)   x_{i+1}   |f(x_{i+1})|
1 1.25 0.80271163 1.3961982 -0.5749267 0.6750733 0.040967305
2 0.6750733 0.040967305 1.2335021 -0.033212189 0.64186112 0.00017943314
3 0.64186112 0.00017943314 1.2227804 -0.00014674191 0.64171437 3.3892973e-09
Table 3: Newtons method results
Figure 4: Comparison of the absolute value of the error between the different methods (bisection, linear interpolation, Newton).
Figure 5: Comparison of the natural logarithm of the absolute value of the error between the different methods (bisection, linear interpolation, Newton).
4 Systems of linear equations

To better understand the Jacobi method, we take another look at the example from the notes (on p. 24). It was given that

A = [ 5  -3   1 ]
    [ 2   3   0 ]
    [ 6   1   8 ]

from which we obtain the following matrices:

D = [ 5  0  0 ]     L = [ 0  0  0 ]     U = [ 0  -3   1 ]
    [ 0  3  0 ],        [ 2  0  0 ],        [ 0   0   0 ].
    [ 0  0  8 ]         [ 6  1  0 ]         [ 0   0   0 ]
The Jacobi method is implemented via the vectorial equation

x^{(k+1)} = D^{-1} \big[ b - (L + U) x^{(k)} \big],   k = 1, 2, 3, ...
From the information we have at our disposal, it follows that

D^{-1} = [ 1/5   0    0  ]   [ 0.2   0      0     ]
         [  0   1/3   0  ] ≈ [ 0     0.333  0     ]
         [  0    0   1/8 ]   [ 0     0      0.125 ]

and

L + U = [ 0  -3   1 ]
        [ 2   0   0 ].
        [ 6   1   0 ]
The first step (k = 1) of the Jacobi method for this example, with right-hand side b = (3, 0, -7)^T and starting vector x^{(1)} = (2, -2, 2)^T, is then

x^{(2)} = D^{-1} \big[ b - (L + U) x^{(1)} \big]
        = diag(0.2, 0.333, 0.125) \big[ (3, 0, -7)^T - (8, 4, 10)^T \big]
        = diag(0.2, 0.333, 0.125) (-5, -4, -17)^T
        = (-1.000, -1.332, -2.125)^T.
Next we calculate the residual term for this step,

r^{(2)} = A x^{(2)} - b = (-3.129, -5.996, -24.332)^T - (3, 0, -7)^T = (-6.129, -5.996, -17.332)^T,

and its magnitude is calculated to be

||r^{(2)}|| = \sqrt{(-6.129)^2 + (-5.996)^2 + (-17.332)^2} \approx 19.337.

This is of course much larger than any tolerance which we would choose to impose. So we repeat the process.
Let k = 2 and then

x^{(3)} = D^{-1} \big[ b - (L + U) x^{(2)} \big]
        = diag(0.2, 0.333, 0.125) \big[ (3, 0, -7)^T - (1.871, -2.000, -7.332)^T \big]
        = diag(0.2, 0.333, 0.125) (1.129, 2.000, 0.332)^T
        = (0.2258, 0.6660, 0.0415)^T.
The residual term for this step is calculated to be

r^{(3)} = A x^{(3)} - b = (-0.8275, 2.4496, 2.3528)^T - (3, 0, -7)^T = (-3.8275, 2.4496, 9.3528)^T

and its magnitude is calculated to be

||r^{(3)}|| = \sqrt{(-3.8275)^2 + (2.4496)^2 + (9.3528)^2} \approx 10.398.

We keep on repeating the process until the magnitude of the residual term is less than our imposed tolerance.
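The whole process is easy to automate. A compact sketch (using the matrix, right-hand side and starting vector as reconstructed above) that performs Jacobi sweeps and stops on the residual norm:

#include <cmath>
#include <cstdio>

int main() {
    const double A[3][3] = {{5, -3, 1}, {2, 3, 0}, {6, 1, 8}};
    const double b[3] = {3, 0, -7};
    double x[3] = {2, -2, 2}, xn[3];      // current iterate and the next one
    for (int k = 0; k < 50; ++k) {
        for (int i = 0; i < 3; ++i) {     // x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii
            double s = b[i];
            for (int j = 0; j < 3; ++j)
                if (j != i) s -= A[i][j] * x[j];
            xn[i] = s / A[i][i];
        }
        double r2 = 0.0;                  // squared residual norm ||A x - b||^2
        for (int i = 0; i < 3; ++i) {
            double ri = -b[i];
            for (int j = 0; j < 3; ++j) ri += A[i][j] * xn[j];
            r2 += ri * ri;
        }
        for (int i = 0; i < 3; ++i) x[i] = xn[i];
        if (std::sqrt(r2) < 1e-6) { std::printf("converged at k = %d\n", k + 1); break; }
    }
    std::printf("x = (%.6f, %.6f, %.6f)\n", x[0], x[1], x[2]);
    return 0;
}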
Example 5. Find the first two iterations of the Jacobi method for the following linear system, using x^{(0)} = 0:

4x_1 + x_2 - x_3 = 5
-x_1 + 3x_2 + x_3 = -4
2x_1 + 2x_2 + 5x_3 = 1

This problem was taken from [1], p. 459, problem (1.a).

Solution: We identify

A = [  4  1 -1 ]   [ 4 0 0 ]   [  0  1 -1 ]
    [ -1  3  1 ] = [ 0 3 0 ] + [ -1  0  1 ]
    [  2  2  5 ]   [ 0 0 5 ]   [  2  2  0 ]
                      (D)        (L + U)

and

b = (5, -4, 1)^T.
The first step (k = 0) of the Jacobi method is calculated as follows:

x^{(1)} = D^{-1} \big[ b - (L + U) x^{(0)} \big] = diag(0.25, 0.333, 0.2) (5, -4, 1)^T = (1.250, -1.332, 0.200)^T.
The residual term and its magnitude are calculated next:

r^{(1)} = A x^{(1)} - b = (-1.532, -1.046, -0.164)^T,   ||r^{(1)}|| \approx 1.862.
Now we set k = 1 and repeat the process:

x^{(2)} = D^{-1} \big[ b - (L + U) x^{(1)} \big]
        = diag(0.25, 0.333, 0.2) \big[ (5, -4, 1)^T - (-1.532, -1.050, -0.164)^T \big]
        = diag(0.25, 0.333, 0.2) (6.532, -2.950, 1.164)^T
        = (1.633, -0.982, 0.233)^T.
The residual term and its magnitude for this step are given by

r^{(2)} = A x^{(2)} - b = (0.317, -0.346, 1.467)^T,   ||r^{(2)}|| \approx 1.540.
5 Approximation methods

Polynomial interpolation

Example 6. Find the interpolating polynomial if we want to interpolate y(x) = \sin x at the points \{-\pi/2, 0, \pi/2\}. Also find a generalised expression for the upper bound on the error.
Solution: To find the polynomial we first identify

x_0 = -\pi/2,   x_1 = 0,   x_2 = \pi/2

and thus n = 2. We have to solve the following system of linear equations:

a_0 - (\pi/2) a_1 + (\pi^2/4) a_2 = -1
a_0 + (0) a_1 + (0) a_2 = 0
a_0 + (\pi/2) a_1 + (\pi^2/4) a_2 = 1

Immediately we see that a_0 = 0 and thus we are left with

-(\pi/2) a_1 + (\pi^2/4) a_2 = -1
(\pi/2) a_1 + (\pi^2/4) a_2 = 1

from which we obtain the solutions a_2 = 0 and a_1 = 2/\pi. We find the interpolating polynomial to be

p_2(x) = a_0 + a_1 x + a_2 x^2 = \frac{2}{\pi} x.
Figure 6 illustrates the interpolating polynomial compared to the actual function.

Figure 6: Graphs of y(x) = \sin x and p_2(x) = (2/\pi) x.

An expression for the error is obtained from equation (5.4) in the notes as follows:

y(x) - p_2(x) = \frac{y'''(\xi(x))}{3!} \prod_{i=0}^{2} (x - x_i) = -\frac{\cos(\xi(x))}{6} \Big( x + \frac{\pi}{2} \Big)(x - 0)\Big( x - \frac{\pi}{2} \Big).
We don't know what the value of \xi(x) is; we only know that x_0 < \xi(x) < x_2. To circumvent this, we calculate a generalised upper bound on the error by taking a maximum of the derivative, because

\max_{[-\pi/2, \pi/2]} |\cos x| = 1

and

|\cos(\xi(x))| \le \max_{[-\pi/2, \pi/2]} |\cos x|   for   -\pi/2 < \xi(x) < \pi/2.

Therefore, it follows that

|y(x) - p_2(x)| \le \frac{1}{6} \Big| \Big( x + \frac{\pi}{2} \Big)(x - 0)\Big( x - \frac{\pi}{2} \Big) \Big| = \frac{1}{6} \Big| x^3 - \frac{\pi^2}{4} x \Big|.
Let us now compare approximating sin(1) and sin(2\pi). For x = 1 we obtain

p_2(1) = \frac{2}{\pi}(1) = \frac{2}{\pi} \approx 0.637,   with error bound   \frac{1}{6} \Big| (1)^3 - \frac{\pi^2}{4}(1) \Big| \approx 0.24457.

But for x = 2\pi we find a different scenario. The approximation is

p_2(2\pi) = \frac{2}{\pi}(2\pi) = 4,   with error bound   \frac{1}{6} \Big| (2\pi)^3 - \frac{\pi^2}{4}(2\pi) \Big| \approx 38.75775.
The major conclusion we have to take away from this example is that the interpolating polynomial is accurate on the interval with endpoints corresponding to our first and last nodes, i.e. (x_0, x_n), and any interpolation done outside this interval is prone to large error.

Another conclusion concerns the degree of the interpolating polynomial. Although the notes state that we should expect a quadratic polynomial for this example, we obtained a linear polynomial. From figure 6 and the error analysis we see that this linear polynomial is sufficient. It is only linear because of our specific choice of nodes. If we were to choose different nodes we would obtain a different interpolating polynomial, as will be seen in the following example.
Exercise: Find the interpolating polynomial for \sin x at the nodes \{0, \pi/4, \pi/2\}. Find a generalised expression for the upper bound on the error and use the interpolating polynomial to approximate \sin(\pi/6). Compare this to the example done in the notes under Lagrange's method.
Lagrange's method

Example 7. Given the points x_0 = 0, x_1 = 0.6 and x_2 = 0.9, construct an interpolation polynomial of degree at most one and of degree at most two to approximate y(0.45) for y(x) = \sqrt{1 + x}.

Solution: For the polynomial of degree at most one we use the nodes x_0 = 0 and x_1 = 0.6, with y_0 = 1 and y_1 = \sqrt{1.6} \approx 1.265, so that

P_1(x) = \sum_{i=0}^{1} y_i L_i(x) = y_0 L_0(x) + y_1 L_1(x) = (1)\Big( -\frac{x}{0.6} + 1 \Big) + (1.265)\Big( \frac{x}{0.6} \Big) = 1 + 0.442x.
Approximating y(0.45) we find P_1(0.45) \approx 1.1989, with absolute approximation error |y(0.45) - P_1(0.45)| \approx 0.005. The error bound is found using formula (5.4) in the notes with

y^{(2)}(\xi) = -\frac{1}{4(1 + \xi)^{3/2}}.

We choose \xi such that |y^{(2)}(\xi)| is a maximum on the interval [0, 0.6]. We see that |y^{(2)}(0)| = 0.25 and |y^{(2)}(0.6)| \approx 0.124. Analysis or a graph of the function reveals a maximum at x = 0 and thus

|y^{(2)}(\xi)| \le |y^{(2)}(0)| \le 0.25.

Thus we estimate the maximum error bound to be

|R(x)| = \Big| \frac{y^{(2)}(\xi)}{2!} \Big| \Big| \prod_{i=0}^{1} (x - x_i) \Big|

and calculate it at x = 0.45 as

|R(0.45)| \le \frac{0.25}{2} |(0.45 - 0)(0.45 - 0.6)| \approx 0.0084.

We see that the absolute error is indeed smaller than this maximum bound on the error.
For the second approximation, we use every node x_0, x_1, x_2. We calculate the Lagrange coefficient polynomials:

L_0(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} = \frac{(x - 0.6)(x - 0.9)}{(0 - 0.6)(0 - 0.9)} = \frac{50}{27} \Big( x^2 - \frac{15}{10} x + \frac{27}{50} \Big),   y_0 = 1

L_1(x) = \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)} = \frac{(x - 0)(x - 0.9)}{(0.6 - 0)(0.6 - 0.9)} = -\frac{50}{9} \Big( x^2 - \frac{9}{10} x \Big),   y_1 = \sqrt{\frac{16}{10}} = \frac{4}{\sqrt{10}} \approx 1.265

L_2(x) = \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)} = \frac{(x - 0)(x - 0.6)}{(0.9 - 0)(0.9 - 0.6)} = \frac{100}{27} \Big( x^2 - \frac{6}{10} x \Big),   y_2 = \sqrt{\frac{19}{10}} \approx 1.378
and determine the interpolating polynomial to be

P_2(x) = \sum_{i=0}^{2} y_i L_i(x) = y_0 L_0(x) + y_1 L_1(x) + y_2 L_2(x)
       = (1) \Big[ \frac{50}{27} \Big( x^2 - \frac{15}{10} x + \frac{27}{50} \Big) \Big] + \Big( \sqrt{\frac{16}{10}} \Big) \Big[ -\frac{50}{9} \Big( x^2 - \frac{9}{10} x \Big) \Big] + \Big( \sqrt{\frac{19}{10}} \Big) \Big[ \frac{100}{27} \Big( x^2 - \frac{6}{10} x \Big) \Big]
       = \Big[ \frac{50}{27} - \frac{50}{9} \sqrt{\frac{16}{10}} + \frac{100}{27} \sqrt{\frac{19}{10}} \Big] x^2 + \Big[ -\frac{25}{9} + 5 \sqrt{\frac{16}{10}} - \frac{20}{9} \sqrt{\frac{19}{10}} \Big] x + 1
       \approx 1 + 0.484x - 0.07x^2.
Figure 7: Graphical representation of y(x) = \sqrt{1 + x}, P_1(x) = 1 + 0.442x and P_2(x) = 1 + 0.484x - 0.07x^2.
Approximating y(0.45) we find P_2(0.45) \approx 1.204, with absolute approximation error |y(0.45) - P_2(0.45)| \approx 5.345 \times 10^{-4}. The error bound is found using formula (5.4) in the notes with

y^{(3)}(x) = \frac{3}{8(1 + x)^{5/2}}.

We want to choose \xi such that y^{(3)}(\xi) is a maximum on the interval [0, 0.9]. We see that y^{(3)}(0) = 0.375 and y^{(3)}(0.9) \approx 0.075. Analysis or a graph of the function reveals a maximum at x = 0 and thus

|y^{(3)}(\xi)| \le |y^{(3)}(0)| \le 0.375.

Thus we estimate the maximum error bound to be

|R(x)| = \Big| \frac{y^{(3)}(\xi)}{3!} \Big| \Big| \prod_{i=0}^{2} (x - x_i) \Big| \le \frac{0.375}{6} |(0.45 - 0)(0.45 - 0.6)(0.45 - 0.9)| \approx 0.00189.

Again, we see that the absolute error is indeed smaller than this maximum bound on the error.
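The Lagrange form is also straightforward to evaluate directly. A small sketch reproducing P_2(0.45) for this example (y(x) = \sqrt{1 + x} at the nodes 0, 0.6 and 0.9, as in the text):

#include <cmath>
#include <cstdio>

int main() {
    const double xs[3] = {0.0, 0.6, 0.9};
    double ys[3], x = 0.45, p = 0.0;
    for (int i = 0; i < 3; ++i) ys[i] = std::sqrt(1.0 + xs[i]);
    for (int i = 0; i < 3; ++i) {        // P2(x) = sum_i y_i L_i(x)
        double L = 1.0;
        for (int j = 0; j < 3; ++j)
            if (j != i) L *= (x - xs[j]) / (xs[i] - xs[j]);
        p += ys[i] * L;
    }
    std::printf("P2(0.45) = %.6f, y(0.45) = %.6f, error = %.3e\n",
                p, std::sqrt(1.45), std::fabs(std::sqrt(1.45) - p));
    return 0;
}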
Exercise: Do similar derivations and analysis, as in the above example, for y(x) = ln(1 + x) and y(x) = \cos x.
Least-squares polynomial fitting
Example 8. Find a linear least-squares fit to the data given in table 5 (question taken from [1]).

i   x_i    y_i
0 0 1
1 0.25 1.2840
2 0.5 1.6487
3 0.75 2.117
4 1 2.7183
Table 5
Solution: We use equations (5.13) in the textbook notes to obtain such a linear fit. According to the equations we need \sum x_i, \sum y_i, \sum x_i^2 and \sum x_i y_i, which can be easily determined from table 5; for completeness the results are given in table 6.
i   x_i    y_i      x_i^2    x_i y_i
0 0 1 0 0
1 0.25 1.2840 0.0625 0.321
2 0.5 1.6487 0.25 0.82435
3 0.75 2.117 0.5625 1.58775
4 1 2.7183 1 2.7183
Table 6

Substituting these sums into the normal equations (5.13),

a_0 N + a_1 \sum_i x_i = \sum_i y_i
a_0 \sum_i x_i + a_1 \sum_i x_i^2 = \sum_i x_i y_i,

one obtains the set of linear equations

5a_0 + 2.5a_1 = 8.768
2.5a_0 + 1.875a_1 = 5.4514
in the unknown variables a_0 and a_1. Solving this system of equations yields

a_0 \approx 0.89968 and a_1 \approx 1.70784,

which in turn gives us the least-squares polynomial

p(x) = 1.70784x + 0.89968.

Figure 8 compares this polynomial to the original data set.
Figure 8: The data of table 5 and the fitted least-squares line p(x) = 1.70784x + 0.89968.
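For reference, a short sketch that accumulates the sums of table 6 and solves the 2 x 2 normal equations above directly (by Cramer's rule):

#include <cstdio>

int main() {
    const double x[5] = {0.0, 0.25, 0.5, 0.75, 1.0};
    const double y[5] = {1.0, 1.2840, 1.6487, 2.117, 2.7183};
    const int n = 5;
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; ++i) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    // Normal equations: n*a0 + sx*a1 = sy ; sx*a0 + sxx*a1 = sxy
    double det = n * sxx - sx * sx;
    double a0 = (sy * sxx - sx * sxy) / det;
    double a1 = (n * sxy - sx * sy) / det;
    std::printf("p(x) = %.5f x + %.5f\n", a1, a0);   // ~1.70784 x + 0.89968
    return 0;
}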
Exercise Calculate the variance for the above example.
Example 9. Find a linear least-squares fit to the data given in table 7 (question taken from [1]). Calculate the variance after you've found the least-squares polynomial.

i   x_i    y_i
0   0      1
1   0.15   1.004
2   0.31   1.031
3   0.5    1.117
4   0.6    1.223
5   0.75   1.422
Table 7
Solution: The complete data set which we will need for the least-squares polynomial fitting is given in table 8. We need to solve the set of linear equations given by

6a_0 + 2.31a_1 = 6.797
2.31a_0 + 1.2911a_1 = 2.82901

in the unknown variables a_0 and a_1. Solving this system of equations yields

a_0 \approx 0.929514 and a_1 \approx 0.528102,

so that the least-squares polynomial is p(x) = 0.528102x + 0.929514.
i   x_i    y_i     x_i^2    x_i y_i
0 0 1 0 0
1 0.15 1.004 0.0225 0.1506
2 0.31 1.031 0.0961 0.31961
3 0.5 1.117 0.25 0.5585
4 0.6 1.223 0.36 0.7338
5 0.75 1.422 0.5625 1.0665
Table 8

The variance is then calculated as

\sigma^2 = \frac{\sum_i \epsilon_i^2}{n - m} = \frac{\sum_i [p(x_i) - y_i]^2}{5 - 1} \approx 0.006145.
Exercise Fit a quadratic least-squares polynomial to the data set in the above
example.
6 Numerical Differentiation

Example 10. Derive

y''''_i = \frac{y_{i+2} - 4y_{i+1} + 6y_i - 4y_{i-1} + y_{i-2}}{h^4},

which is a central-difference equation with error of order O(h^2).
Solution: We start by listing the Taylor series expansions at the different subscripts of y, i.e.

y_{i+1} = y_i + h y'_i + \frac{h^2}{2} y''_i + \frac{h^3}{6} y'''_i + \frac{h^4}{24} y''''_i + \frac{h^5}{5!} y^{(5)}_i + O(h^6)
y_{i-1} = y_i - h y'_i + \frac{h^2}{2} y''_i - \frac{h^3}{6} y'''_i + \frac{h^4}{24} y''''_i - \frac{h^5}{5!} y^{(5)}_i + O(h^6)
y_{i+2} = y_i + 2h y'_i + \frac{4h^2}{2} y''_i + \frac{8h^3}{6} y'''_i + \frac{16h^4}{24} y''''_i + \frac{32h^5}{5!} y^{(5)}_i + O(h^6)
y_{i-2} = y_i - 2h y'_i + \frac{4h^2}{2} y''_i - \frac{8h^3}{6} y'''_i + \frac{16h^4}{24} y''''_i - \frac{32h^5}{5!} y^{(5)}_i + O(h^6)
Adding the first two equations together and the last two equations together will get rid of all the odd derivatives. So one should obtain

y_{i+1} + y_{i-1} = 2y_i + h^2 y''_i + \frac{h^4}{12} y''''_i + O(h^6)

and

y_{i+2} + y_{i-2} = 2y_i + 4h^2 y''_i + \frac{4h^4}{3} y''''_i + O(h^6).
We now want to combine these two equations in some fashion that removes the second derivative. Hence, we take the second equation and subtract four times the first equation, that is

y_{i+2} + y_{i-2} - 4(y_{i+1} + y_{i-1}) = (2y_i - 8y_i) + \Big( \frac{4h^4}{3} - \frac{4h^4}{12} \Big) y''''_i + O(h^6)
y_{i+2} + y_{i-2} - 4y_{i+1} - 4y_{i-1} = -6y_i + \Big( \frac{4h^4}{3} - \frac{h^4}{3} \Big) y''''_i + O(h^6)
y_{i+2} + y_{i-2} - 4y_{i+1} - 4y_{i-1} = -6y_i + h^4 y''''_i + O(h^6)
We can now solve this last equation for y''''_i to find

h^4 y''''_i = y_{i+2} - 4y_{i+1} + 6y_i - 4y_{i-1} + y_{i-2} + O(h^6)
y''''_i = \frac{y_{i+2} - 4y_{i+1} + 6y_i - 4y_{i-1} + y_{i-2}}{h^4} + \frac{O(h^6)}{h^4}
y''''_i = \frac{y_{i+2} - 4y_{i+1} + 6y_i - 4y_{i-1} + y_{i-2}}{h^4} + O(h^2)

and thus

y''''_i \approx \frac{y_{i+2} - 4y_{i+1} + 6y_i - 4y_{i-1} + y_{i-2}}{h^4}.
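The derived stencil is easy to sanity-check numerically: for y = \sin x we have y'''' = \sin x, so the formula should reproduce \sin(1) with an error that decreases like h^2. A minimal sketch:

#include <cmath>
#include <cstdio>

int main() {
    const double x = 1.0;
    for (double h = 0.5; h >= 0.0625; h /= 2.0) {
        // central-difference approximation of the fourth derivative
        double d4 = (std::sin(x + 2*h) - 4*std::sin(x + h) + 6*std::sin(x)
                     - 4*std::sin(x - h) + std::sin(x - 2*h)) / (h*h*h*h);
        std::printf("h = %.4f: d4 = %.8f, error = %.2e\n",
                    h, d4, std::fabs(d4 - std::sin(x)));
    }
    return 0;
}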
7 Numerical Integration (Quadrature)

Example 11. Evaluate

\int_0^2 \frac{1}{x + 4}\,dx,

with tolerance \epsilon = 10^{-4} using the composite Trapezium rule.
Solution: Using the given information, we can obtain the step size we require to evaluate to the required tolerance, i.e.

\frac{h^2}{12} (2 - 0) M < \epsilon = 10^{-4},   where   M = \max_{[0,2]} |f''(x)|   and   f''(x) = \frac{2}{(x + 4)^3}.

From the first derivative test, we know that f'' is positive and decreasing on [0, 2], so its maximum occurs at x = 0 and M = 2/4^3 = 0.03125. Thus

\frac{h^2}{6} (0.03125) < 0.0001   \Rightarrow   h^2 < \frac{6 \times 0.0001}{0.03125}   \Rightarrow   h \le 0.13856,

and using this value of h leads us to an estimate of the number of subintervals we shall be needing, i.e.

N \ge \frac{b - a}{h} \ge \frac{2 - 0}{0.13856} = 14.43418.

Hence, we shall use N = 15 subintervals (16 nodes) and we recalculate the step size for this to be h = 0.13333. We list the nodes and corresponding function values in table 9.
The formula for the composite Trapezium rule is as follows:

\int_a^b f(x)\,dx \approx \frac{h}{2} \Big[ y_0 + y_N + 2 \sum_{k=1}^{N-1} y_k \Big]

and with the data in table 9 one obtains

\int_0^2 \frac{1}{x + 4}\,dx \approx 0.40552.
i   x_i        f(x_i)
0 0 0.25
1 0.133333 0.241935
2 0.266667 0.234375
3 0.4 0.227273
4 0.533333 0.220588
5 0.666667 0.214286
6 0.8 0.208333
7 0.933333 0.202703
8 1.06667 0.197368
9 1.2 0.192308
10 1.33333 0.1875
11 1.46667 0.182927
12 1.6 0.178571
13 1.73333 0.174419
14 1.86667 0.170455
15 2 0.166667
Table 9
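A small sketch of the composite Trapezium rule as applied in this example (N = 15 subintervals of [0, 2]):

#include <cmath>
#include <cstdio>

double f(double x) { return 1.0 / (x + 4.0); }

int main() {
    const double a = 0.0, b = 2.0;
    const int N = 15;                    // number of subintervals
    const double h = (b - a) / N;
    double sum = 0.5 * (f(a) + f(b));    // endpoint terms y_0 and y_N
    for (int k = 1; k < N; ++k)
        sum += f(a + k * h);             // interior nodes, weight 1
    std::printf("integral ~= %.5f (exact ln(3/2) = %.5f)\n",
                h * sum, std::log(1.5));
    return 0;
}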
Exercise Devise a tolerance and step size to evaluate

\int_0^2 \frac{1}{x + 4}\,dx,

using only 5 nodes with the composite Simpson's rule.
Example 12. Evaluate

\int_{0.5}^{1} x^4\,dx,

with tolerance \epsilon = 10^{-4} using the composite Trapezium rule.
Solution: Using the given information, we can obtain the step size we require to evaluate to the required tolerance, i.e.

\frac{h^2}{12} (1 - 0.5) M < \epsilon = 10^{-4},   where   M = \max_{[0.5,1]} |f''(x)|   and   f''(x) = 12x^2.

From the first derivative test, we set f'''(x) = 24x = 0 to find an extremum at x = 0. However, this critical point lies outside our integration interval and hence we do not consider it. Substitution of the endpoints of the integration interval yields f''(0.5) = 3 and f''(1) = 12, so M = 12. Thus

0.5h^2 < 0.0001   \Rightarrow   h^2 < \frac{0.0001}{0.5}   \Rightarrow   h \le 0.0141,

and using this value of h leads us to an estimate of the number of subintervals we shall be needing, i.e.

N \ge \frac{b - a}{h} \ge \frac{1 - 0.5}{0.0141} = 35.461.

Hence, we shall use N = 36 subintervals (37 nodes) and we recalculate the step size for this to be h = 0.0139. We list the nodes and corresponding function values in table 10.
From this we compute

\int_{0.5}^{1} x^4\,dx \approx 0.19381,

which we can compare with the actual value of the integral, found to be

\int_{0.5}^{1} x^4\,dx = \frac{x^5}{5} \Big|_{0.5}^{1} = \frac{1}{5} - 0.00625 = 0.19375,

yielding an actual error of |\epsilon| = 0.00006.
i   x_i        f(x_i)
0 0.5 0.0625
1 0.513889 0.0697392
2 0.527778 0.0775898
3 0.541667 0.0860852
4 0.555556 0.0952599
5 0.569444 0.105149
6 0.583333 0.115789
7 0.597222 0.127217
8 0.611111 0.13947
9 0.625 0.152588
10 0.638889 0.16661
11 0.652778 0.181577
12 0.666667 0.197531
13 0.680556 0.214513
14 0.694444 0.232568
15 0.708333 0.251739
16 0.722222 0.272072
17 0.736111 0.293612
18 0.75 0.316406
19 0.763889 0.340503
20 0.777778 0.36595
21 0.791667 0.392798
22 0.805556 0.421097
23 0.819444 0.450898
24 0.833333 0.482253
25 0.847222 0.515216
26 0.861111 0.549841
27 0.875 0.586182
28 0.888889 0.624295
29 0.902778 0.664238
30 0.916667 0.706067
31 0.930556 0.749841
32 0.944444 0.79562
33 0.958333 0.843464
34 0.972222 0.893433
35 0.986111 0.945591
36 1 1
Table 10
References
[1] Burden, R.L. & Faires, J.D., Numerical Analysis, 9th edition, Cengage
Learning, 2010.
[2] Burden, R.L. & Faires, J.D., Numerical Analysis, 8th edition, Thomson
Brooks/Cole, 2005.