
Introduction to

Computational Mathematics:
An Outline

William C. Bauldry

Professor Emeritus & Adjunct Research Professor


Dept of Mathematical Sciences
Appalachian State University
[email protected]
Copyright © WmCB, 2020. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any
form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise,
except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without
the prior written permission of the Author.

Introduction to Computational Mathematics: An Outline, William C. Bauldry

Produced in the United States of America.

10 9 8 7 6 5 4 3 2 1

Cover photos: Mount Field National Park, Tall Trees Walk, Tasmania © WmCB, 2018.
Introduction to Computational Mathematics: An Outline

“1 + 1 = 3
for large enough
values of 1.”

Photo © WmCB, 2015


Introduction to Computational Mathematics: An Outline
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
I. Computer Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
II. Control Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
S I. Special Topics: Computation Cost and Horner’s Form . . . . . . . 65
III. Numerical Differentiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
IV. Root Finding Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
S II. Special Topics: Modified Newton’s Method . . . . . . . . . . . . . . . . 114
V. Numerical Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
VI. Polynomial Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
S III. Case Study: TI Calculator Numerics . . . . . . . . . . . . . . . . . . . . . . . 163
VII. Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
VIII. References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200

Note: Light blue text links inside this Outline; grey text links to a web page. ICM i
Introduction to Computational Mathematics: An Outline

Photo © WmCB, 2015

“On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the
machine wrong figures, will the right answers come out?’ ... I am not able
rightly to apprehend the kind of confusion of ideas that could provoke such a
question.” — Charles Babbage, Passages from the Life of a Philosopher, p. 67.
ICM ii
Preface: Is Computation Important?
A simple computational error can cause very serious problems.

The Mars Climate Orbiter Crash


Two teams used different units of measure writing the Orbiter’s software:
NASA used the metric system, and Lockheed Martin used the Imperial system.
Read NASA’s Mars Climate Orbiter page.

The Ariane 5 Explosion


The European Space Agency’s Ariane 5 rocket exploded 37 seconds after
takeoff; the explosion was caused by an integer overflow in the software used
for launching the rocket. Watch the launch video.

The Gulf War Dhahran “Scud” Missile Attack


The Patriot missile software measured time in binary and decimal based on a
system clock that ticked in tenths of a second, a non-terminating fraction in
binary. After running for over 100 hours, the system was inaccurate.
Twenty-eight U.S. soldiers died when the system failed to track a Scud missile
fired from Iraq. Read Michael Barr’s “Lethal Software Defects: Patriot Missile Failure”.
ICM iii
Preface: Three Factors of Computation
There are three aspects to keep in mind while studying computation. The
first two are always in tension with each other:

Accuracy versus Efficiency

• Accuracy concerns how much error occurs and how to control the error.
• Efficiency concerns how much computation is needed to produce a result.

[diagram: a balance weighing Accuracy against Efficiency over Error]

Stability is the third aspect to always keep in mind. In a stable


computation, a small change in inputs produces only a small change in
outputs. As a counterpoint, investigate instability in a “Lorenz attractor.”

Starting at [1, 1, 1]. Starting at [ 1, 2, 3].


ICM iv
Preface: A Computational Mathematics Course
This slide deck was developed while teaching Appalachian State’s MAT 2310,
Computational Mathematics (3 cr). The course was designed to introduce nu-
merical analysis and give students experience with basic programming structures
in a mathematical environment. A sample syllabus is
1. Computer Arithmetic . . . . . . . . . 2 weeks
2. Control Structures . . . . . . . . . . . 2 weeks
3. Numerical Differentiation . . . . 3 weeks
4. Root Finding Algorithms . . . . . 3 weeks
5. Numerical Integration . . . . . . . 2 weeks
6. Polynomial Interpolation . . . . . 2 weeks
Midterm Exam
Team Project Posters . . . . . . . . . . . 1 week
Final Exam
Programming was done with the computer algebra system Maple, to easily
adjust precision, and with Python. The Special Topics and Case Study come
in as time allows. A selection of group projects appears at the end of this Outline.

Thanks go to the students of MAT 2310 who lived through the development of
both a new course and these slides. Many thanks also go to my colleagues Greg
Rhoads, René Salinas, and Eric Marland who co-designed the course and gave
great feedback during the adventure.
— WmCB, Jan 2020
ICM v
I. Computer Arithmetic

Sections
1. Scientific Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2. Converting to Different Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
3. Floating Point Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
4. IEEE-754 Floating Point Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
5. Maple’s Floating Point Representation . . . . . . . . . . . . . . . . . . . . . . . . . 18
6. Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

ICM 1 – 201
I. Computer Arithmetic: Scientific Notation
Definitions of Scientific Notation
Normalized: Any numeric value can be written as
    d0.d1 d2 d3 . . . dn × 10^p
where 1 ≤ d0 ≤ 9.
Engineering: Any numeric value can be written as
    n.d1 d2 d3 . . . dm × 10^q
where 1 ≤ n ≤ 999 and q is a multiple of 3.

Examples (NIST’s ‘Values of Constants’)

• Speed of light in a vacuum: 2.997 924 58 × 10^8 m/s
• Newtonian constant of gravitation: 6.673 84 × 10^−11 m^3/(kg · s^2)
• Avogadro’s number: 6.022 141 × 10^23 mol^−1
• Mass of a proton: 1.672 621 777 × 10^−27 kg
• Astronomical unit: 92.955 807 27 × 10^6 mi
ICM 2 – 201
Conversions

Basic Base Transmogrification: Integers

Binary → Decimal (vector version): Think of the binary number as a vector of
1’s and 0’s. Use a dot product to convert to decimal.
1. x2 = 101110
2. x10 = ⟨1, 0, 1, 1, 1, 0⟩ · ⟨2^5, 2^4, 2^3, 2^2, 2^1, 2^0⟩
3. x10 = 2^5 + 2^3 + 2^2 + 2^1 = 46

Decimal → Binary (algebra version): Successively compute the bits
(from right to left).
1. bit = x mod 2, then set x = ⌊x/2⌋
2. Repeat until x = 0
E.g., x10 = 46:
  b0 = 0; then set x = 23
  b1 = 1; x = 11
  b2 = 1; x = 5
  b3 = 1; x = 2
  b4 = 0; x = 1
  b5 = 1; x = 0
Whence x2 = 101110
ICM 3 – 201
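The two integer algorithms above translate directly into code. A minimal Python sketch (the function names are illustrative, not from the course materials):

```python
def to_binary(x):
    """Decimal -> binary: bit = x mod 2, then x = floor(x/2), until x = 0."""
    if x == 0:
        return "0"
    bits = []
    while x > 0:
        bits.append(str(x % 2))   # next bit, produced right to left
        x //= 2                   # x = floor(x/2)
    return "".join(reversed(bits))

def from_binary(s):
    """Binary -> decimal: dot product of the bit vector with powers of 2."""
    return sum(int(b) * 2**k for k, b in enumerate(reversed(s)))
```

For example, `to_binary(46)` gives `"101110"` and `from_binary("101110")` gives `46`, matching the worked example.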
Conversions

Basic Base Transmogrification: Fractions

Binary → Decimal (vector version): Think of the binary number as a vector of
1’s and 0’s. Use a dot product to convert to decimal.
1. x2 = 0.10111
2. x10 = ⟨1, 0, 1, 1, 1⟩ · ⟨2^−1, 2^−2, 2^−3, 2^−4, 2^−5⟩
3. x10 = 2^−1 + 2^−3 + 2^−4 + 2^−5 = 0.71875

Decimal → Binary (algebra version): Successively compute the bits
(from left to right).
1. bit = ⌊2x⌋, then set x = frac(2x)
2. Repeat until x = 0 (or when reaching maximum length)
E.g., x10 = 0.71875:
  b−1 = 1; then set x = 0.43750
  b−2 = 0; x = 0.87500
  b−3 = 1; x = 0.75000
  b−4 = 1; x = 0.50000
  b−5 = 1; x = 0.0 Stop
Whence x2 = 0.10111
ICM 4 – 201
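The left-to-right fraction algorithm can be sketched the same way (an illustrative Python version, with a maximum-length cutoff for non-terminating expansions):

```python
def frac_to_binary(x, max_bits=24):
    """Fraction -> binary: bit = floor(2x), then x = frac(2x)."""
    assert 0 <= x < 1
    bits = []
    while x > 0 and len(bits) < max_bits:  # stop at x = 0 or at max length
        x *= 2
        bit = int(x)    # bit = floor(2x)
        bits.append(str(bit))
        x -= bit        # x = frac(2x)
    return "0." + "".join(bits)
```

`frac_to_binary(0.71875)` returns `"0.10111"`, reproducing the example above.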
Conversions

Terminating Expansions?
When does a fraction’s expansion terminate?

Base 10: A decimal fraction terminates when r = n/10^p = n/(2^p · 5^p).
Base 2: A binary fraction terminates when r = m/2^p.

Examples
1. 1/10 = 0.1₁₀ = 0.0(0011)₂ (the block in parentheses repeats)
2. 1/3 = 0.(3)₁₀ = 0.(01)₂
3. √2 = 1.414 213 562 373 095 048 8…₁₀ ≐ 1.0110 1010 0000 1001 111₂
4. π = 3.141 592 653 589 793 238 5…₁₀ ≐ 11.0010 0100 0011 1111 01₂
ICM 5 – 201
Conversions

Examples (Convert A Repeating Binary Expansion)

Convert n = 0.0101 1011 01 · · · = 0.0(101)₂ (the block 101 repeats) to decimal.
1. Convert the repeating block to decimal:
     101₂ = 5₁₀

2. Rewrite n in “powers-of-two” notation:

     n = 5·2^−4 + 5·2^−7 + 5·2^−10 + 5·2^−13 + · · ·

3. Express n as a geometric series:

     n = 5·2^−4 · Σ_{k=0}^∞ 2^−3k

4. And sum the series:

     n = 5·2^−4 · 1/(1 − 2^−3) = 5/14
ICM 6 – 201
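The four steps can be checked with exact rational arithmetic; a quick sketch using Python's fractions module:

```python
from fractions import Fraction

# Repeating block 101 (= 5) first appears at weight 2^-4 and recurs every
# 3 bits, so n = 5*2^-4 * (1 + 2^-3 + 2^-6 + ...) = 5*2^-4 / (1 - 2^-3).
n = Fraction(5, 2**4) / (1 - Fraction(1, 2**3))
print(n)  # 5/14
```

Exact arithmetic confirms the geometric-series sum without any rounding.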
Binary Coded Decimal
BCD
The digits 0 to 9 can be represented with four binary bits with weights 8 4 2 1.
For example, 93₁₀ would be

  BCD: 1001 0011 (the digits 9 and 3, four bits each)
  vs. Binary: 0101 1101 (weights 128 64 32 16 8 4 2 1)

Advantages:
• Eliminates some repeating expansions
• Rounding is simpler
• Displaying values is easier
Disadvantages:
• Fewer numbers per 8 bits (100/256 ≈ 39%)
• Complicated arithmetic routines
• Slower to compute
Nearly all calculators use BCD formats.
ICM 7 – 201
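A BCD encoder is one line per decimal digit; an illustrative Python sketch:

```python
def to_bcd(n):
    """Encode a nonnegative integer in BCD: four bits (8 4 2 1) per digit."""
    return " ".join(format(int(d), "04b") for d in str(n))
```

`to_bcd(93)` gives `"1001 0011"`, while `format(93, "08b")` gives the ordinary binary encoding `"01011101"`.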
Floating Point Numbers
Definition (Floating Point Representation)
A number x is represented (and approximated) as
    x ≐ s × f × b^(e−p)
where
  s: sign ±1, f: mantissa, b: base, usually 2, 10, or 16,
  e: biased exponent (shifted), p: exponent’s bias (shift)
The standard floating point storage format is
    | s | e | f |

Exponent Bias
The bias value is chosen to give equal ranges for positive and negative
exponents without needing a sign bit. E.g., for an exponent with
• 8 bits: 0 ≤ e ≤ 255 = 2^8 − 1. Using p = 2^8/2 − 1 = 127 gives an exponent
  range of −127 ≤ e − 127 ≤ 128.
• 11 bits: 0 ≤ e ≤ 2047 = 2^11 − 1. Using p = 2^11/2 − 1 = 1023 gives
  −1023 ≤ e − 1023 ≤ 1024.
ICM 8 – 201
Samples
Examples
1. −3.95 = (−1)^1 × 0.1234375 × 2^(21−16)
   So s = 1, f = 0.1234375, b = 2, e = 21, and p = 16.
   Note: (−1)^1 × 0.1234375 × 2^(21−16) = −3.950, so err = 0.
   Storage format: | 1 | 21 | 0.1234375 |
2. 11/3 = (−1)^0 × 0.2291666667 × 16^(16384−16383)
   So s = 0, f = 0.2291666667, b = 16, e = 16384, and p = 16383.
   Note: (−1)^0 × 0.2291666667 × 16^(16384−16383) = 3.6666666672,
   so err ≈ 5.3 · 10^−10.
   Storage format: | 0 | 16384 | 0.2291666667 |
3. 2^10 = 1024 = (−1)^0 × 0.250 × 16^(66−63)
   So s = 0, f = 0.250, b = 16, e = 66, and p = 63.
   Note: (−1)^0 × 0.250 × 16^(66−63) = 1024.0, so err = 0.
   Storage format: | 0 | 66 | 0.2500000000 |
ICM 9 – 201
IEEE Standard for Floating-Point Arithmetic
Definition (IEEE-754)
Normalized Floating Point Representation (Binary)

Single precision: x ≐ (−1)^s × (1. + f[23]) × 2^(e[8]−127)   (32 bit)

  [bit layout: sign bit 31; exponent bits 30–23; fraction bits 22–0]

Double precision: x ≐ (−1)^s × (1. + f[52]) × 2^(e[11]−1023)   (64 bit)

  [bit layout: sign bit 63; exponent bits 62–52; fraction bits 51–0]

Online Floating-Point Converter

Original IEEE-754-1985, 2008 revision, 2019 revision ($)
ICM 10 – 201
IEEE Standard for Floating-Point Arithmetic, II

Single Precision Bit Patterns

Pattern                                  Value
0 < e < 255                              n = (−1)^s × 2^(e−127) × 1.f
                                         (normal number)
e = 0, f = 0 (all bits are zero)         n = (−1)^s × 0.0 (signed zero)
e = 0, f ≠ 0 (at least 1 nonzero bit)    n = (−1)^s × 2^−126 × 0.f
                                         (subnormal number)
e = 255, f = 0, s = 0                    +INF (plus infinity)
e = 255, f = 0, s = 1                    −INF (minus infinity)
e = 255, f ≠ 0                           NaN (‘Not-a-Number’)
ICM 11 – 201
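The bit patterns above can be inspected directly by reinterpreting a value's single-precision encoding as a 32-bit integer; a sketch using Python's standard struct module (not part of the original slides):

```python
import struct

def float32_fields(x):
    """Split a value's single-precision encoding into (sign, exponent, fraction)."""
    (w,) = struct.unpack(">I", struct.pack(">f", x))  # raw 32-bit word
    return w >> 31, (w >> 23) & 0xFF, w & 0x7FFFFF

# 0.15625 = 1.25 * 2^-3, so s = 0, e = -3 + 127 = 124, f = 0.25 * 2^23
print(float32_fields(0.15625))  # (0, 124, 2097152)
```

Trying `float32_fields(float("inf"))` gives `(0, 255, 0)`, the +INF pattern from the table.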
Big and Small & Gaps
IEEE-754 Largest and Smallest Representable Numbers
Precision   Digits   Max Exp    Smallest #           Largest #
Single      ≈ 9      ≈ 38.2     ≈ 1.18 · 10^−38      ≈ 3.4 · 10^38
Double      ≈ 17     ≈ 307.95   ≈ 2.225 · 10^−308    ≈ 1.798 · 10^308

Gaps in the Floating Point Number Line

The size of the gap between consecutive floating point numbers gets larger as
the numbers get larger.

[number line: overflow | usable range | underflow | 0 | underflow | usable range | overflow,
with boundaries at −realmax ≈ −3.4 · 10^38, −realmin ≈ −1.18 · 10^−38,
+realmin ≈ +1.18 · 10^−38, +realmax ≈ +3.4 · 10^38;
each tickmark represents one floating point number]
ICM 12 – 201
Machine Epsilon
Definition
The machine epsilon ε is the smallest value such that
1 + ε ≠ 1
for a given numeric implementation.

Example (Single Precision [using Java ])


wmcb:I cat machineEpsilon.java
class mEps {
public static void main(String[] args) {
float machEps = 1.0f;
do {
machEps /= 2.0f;
} while ((float)(1.0 + (machEps/2.0)) != 1.0);
System.out.println("Calculated machine epsilon: " + machEps);
}
}
wmcb:I javac machineEpsilon.java
wmcb:I java mEps
Calculated machine epsilon: 1.1920929E-7 ⟹ εs ≈ 1.192 · 10^−7

ICM 13 – 201
Machine Epsilon, II
Example (Double Precision [using Java ])
wmcb:I cat machineEpsilonD.java
class mEpsD {
public static void main(String[] args) {
double machEps = 1.0d;
do {
machEps /= 2.0d;
} while ((double)(1.0 + (machEps/2.0)) != 1.0);
System.out.println("Calculated machine epsilon: " + machEps);
}
}
wmcb:I javac machineEpsilonD.java
wmcb:I java mEpsD
Calculated machine epsilon: 2.220446049250313E-16 ⟹ εd ≈ 2.22 · 10^−16

Single Precision: εs ≈ 1.192 · 10^−7        Double Precision: εd ≈ 2.22 · 10^−16

ICM 14 – 201
Machine Epsilon, III
Example (Using Python 3)
>>> macEps = 1.0
>>> while (1.0 + macEps) != 1.0:
macEps = macEps/2.0

>>> macEps
1.1102230246251565e-16
>>>
>>> import numpy as np
>>> macEpsL = np.longdouble(1.0)
>>> while (1.0 + macEpsL) != 1.0:
macEpsL = macEpsL/2.0

>>> macEpsL
5.42101086242752217e-20
>>>
>>> np.finfo(np.longdouble)
finfo(resolution=1.0000000000000000715e-18,
min=-1.189731495357231765e+4932, max=1.189731495357231765e+4932,
dtype=float128)

Note: Python’s float defaults to double precision. ICM 15 – 201


Large Value Floating Point Gap

Example (Double Precision [using Java ])


• Approximate the gap to the next floating point value above 10^30.
wmcb:I cat FPGap.java
class BigGap {
public static void main(String[] args) {
float gap = 1e23f;
float n = 1e30f;
do {
gap /= 2.0;
} while ((float)(n+(gap/2.0)) != n);
System.out.println("Approximate gap: " + gap);
}
}
wmcb:I javac FPGap.java
wmcb:I java BigGap
Approximate gap: 5.0E22

ICM 16 – 201
Properties

Floating Point Arithmetic Properties

Commutative: Addition is commutative:
    n1 + n2 = n2 + n1

  Multiplication is commutative:
    n1 × n2 = n2 × n1

NonAssociative: Addition is not associative:

    (n1 + n2) + n3 ≠ n1 + (n2 + n3)

  Multiplication is not associative:

    (n1 × n2) × n3 ≠ n1 × (n2 × n3)

NonDistributive: Multiplication does not distribute over addition:

    n1 × (n2 + n3) ≠ (n1 × n2) + (n1 × n3)
ICM 17 – 201
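The non-associativity of addition is easy to exhibit in double precision; a small Python demonstration (values chosen around the double-precision epsilon ≈ 2.22 · 10^−16):

```python
a, b, c = 1.0, 1e-16, 1e-16   # 1e-16 is below eps/2 relative to 1.0

left = (a + b) + c    # b is absorbed, then c is absorbed: exactly 1.0
right = a + (b + c)   # b + c = 2e-16 is large enough to survive
print(left == right)  # False
```

The grouping decides whether the small terms survive, so `(a + b) + c ≠ a + (b + c)`.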
Maple’s Floating Point Representation

Maple’s Floating Point Implementation


Maximum exponent = 9223372036854775806
Minimum exponent = −9223372036854775806
Maximum ‘float’ = 1. × 10^9223372036854775806
Minimum ‘float’ = 1. × 10^−9223372036854775806
Maximum digits = 38654705646
Maximum binary power = 4611686018427387903

Example (Maple’s Floating Point Structure)


> N := evalf(Pi, 20):
dismantle(N)
FLOAT(3): 3.1415926535897932385
INTPOS(6): 31415926535897932385
INTNEG(2): -19

ICM 18 – 201
Floating Point Rounding

IEEE-754 Rounding Algorithms1


Rounding to Nearest
• Round to nearest, ties to even (default for binary floating-point)

• Round to nearest, ties to odd

• Round to nearest, ties away from zero (used by Maple and Matlab)

Directed Roundings
• Round toward 0 — truncation

• Round toward +∞ — rounding up or ceiling: ⌈x⌉

• Round toward −∞ — rounding down or floor: ⌊x⌋

Team Project: Implement Stochastic Rounding in Maple or python.

ICM 19 – 201
1 See EE Times’ “Rounding Algorithms”
Error

Defining Error
Absolute Error: The value err_abs = |actual − approximate|
Relative Error: The ratio err_rel = |actual − approximate| / |actual| = err_abs / |actual|

Example (Weighty Thoughts)

A long-tailed field mouse normally weighs up to about 50 g. Suppose a
lab-tech makes an error of 2.5 g (≈ a penny) when weighing a mouse. The
relative error is
    err_rel = 2.5 g / 50 g = 5%

A mature African bush elephant normally weighs about 6.5 tons. Suppose a
zoo-keeper makes an error of 50 lb (≈ a 7-year-old boy) weighing an elephant.
The relative error is
    err_rel = 50 lb / 13000 lb ≐ 0.4%
ICM 20 – 201
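The two relative-error computations can be reproduced with a one-line helper (an illustrative Python sketch):

```python
def rel_err(actual, approximate):
    """Relative error |actual - approximate| / |actual|."""
    return abs(actual - approximate) / abs(actual)

print(rel_err(50.0, 52.5))     # 0.05, i.e. 5% (the mouse)
print(rel_err(13000.0, 13050.0))  # about 0.0038, i.e. roughly 0.4% (the elephant)
```

The same absolute error matters far more on a small measurement than on a large one.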
Error Accumulates
Adding Error
Add 1 + 1/2 + 1/3 + 1/4 + · · · + 1/10^6 forwards and backwards with 6 digits.
Maple
  Digits := 6:
  N := 10^6:
  Sf := 0: Sb := 0:
  for i from 1 to N do          for j from N to 1 by -1 do
    Sf := Sf + (1.0/i);           Sb := Sb + (1.0/j);
  end do:                       end do:
  Sf; Sb;
     10.7624        14.0537

The correct value of Σ_{k=1}^{10^6} 1/k to 6 significant digits is 14.3927.

  relative error(Sf) ≈ 25.2%,    relative error(Sb) ≈ 2.4%

What happened?
ICM 21 – 201
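What happened: summing forwards, the partial sum quickly grows large and the small terms are absorbed; summing backwards, the small terms accumulate before the sum grows. The effect is reproducible in Python using half precision as the low-precision stand-in (assumes NumPy is available; 6-digit decimal arithmetic as in the Maple run is not a built-in Python type):

```python
import numpy as np

N = 10**4
reference = sum(1.0 / k for k in range(1, N + 1))  # double precision, near exact

fwd = np.float16(0.0)
for k in range(1, N + 1):        # large partial sum absorbs the small terms
    fwd = np.float16(fwd + np.float16(1.0 / k))

bwd = np.float16(0.0)
for k in range(N, 0, -1):        # small terms accumulate before the sum grows
    bwd = np.float16(bwd + np.float16(1.0 / k))
```

The backward sum lands much closer to the reference value than the forward sum, just as Sb beat Sf above.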
Error Accumulates
Subtracting Error
Solve for x: 1.22x^2 + 3.34x + 2.28 = 0 (3 digit, 2 decimal precision)

The quadratic formula r± = (−b ± √(b^2 − 4ac)) / (2a) can lead to problems.

Using the formula directly:
  b^2 = 11.2
  4ac = 11.1
  √(b^2 − 4ac) = 0.32
  r+, r− = −1.24, −1.50
But the exact roots are:
  R± = (−167 ± √73)/122 ≐ −1.30, −1.44
The relative error is ≈ 5%.

“Rationalize the numerator” to eliminate a bad subtraction:
  R− = (−b − √(b^2 − 4ac)) / (2a) = 2c / (−b + √(b^2 − 4ac))
ICM 22 – 201
More Error Accumulates
Even Worse Subtraction Error
Solve for x: 0.01x^2 − 1.00x + 0.02 = 0 (3 digit, 2 decimal precision)

Again using the quadratic formula directly:
  4ac = 0.0008 ≐ 0.00
  √(b^2 − 4ac) ≐ 1.00
  r± ≐ 100., 0.00
But the real roots are:
  R± ≐ 99.98, 0.02
The relative errors are err_rel ≈ 0.02% & 100% !

Again, “rationalize the numerator” to eliminate a bad subtraction:
  R− = (−b − √(b^2 − 4ac)) / (2a) = 2c / (−b + √(b^2 − 4ac))

  (−b − √(b^2 − 4ac)) / (2a) ≐ 0.00   but   2c / (−b + √(b^2 − 4ac)) ≐ 0.02
ICM 23 – 201
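The rationalized formula pairs naturally with the ordinary one: use the quadratic formula for the root where b and the square root add, and the 2c form for the other. An illustrative Python sketch:

```python
import math

def stable_roots(a, b, c):
    """Quadratic roots avoiding cancellation: never subtract b from sqrt(b^2-4ac).

    Assumes a != 0 and real roots (b*b >= 4*a*c).
    """
    d = math.sqrt(b * b - 4 * a * c)
    q = -(b + math.copysign(d, b)) / 2   # b and d combine with matching signs
    return q / a, c / q                  # second root via 2c / (-b +/- d)

r1, r2 = stable_roots(0.01, -1.00, 0.02)
print(round(r1, 2), round(r2, 4))  # 99.98 0.02
```

Even in full double precision the naive formula loses digits on the small root; this form keeps both roots accurate.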
‘Accuracy v Precision’
‘On Target’ v ‘Grouping’
Accuracy: How closely computed values agree with the true value.

Precision: How closely computed values agree with each other.

[target diagrams: precise and accurate; not precise but accurate;
precise but not accurate; not precise and not accurate]

ICM 24 – 201
‘Roundoff v Truncation’
Computational v Formulaic
Roundoff: Error from floating point arithmetic (fixed number of
digits)

Truncation: Error from formula approximation (dropping terms)

Examples
6 5 11 ?
• Roundoff 10 + 10 = 10 () 1. + 1. = 1.

4 5 6 7 8 ?
10 + 10 + 10 + 10 + 10 = 3 () 0. + 1. + 1. + 1. + 1. = 3.


• Truncation: sin(θ) = Σ_{k=1}^∞ (−1)^(k−1) θ^(2k−1)/(2k−1)!  ⟺  sin(θ) ≈ θ − (1/6)θ^3

• tan(θ) = Σ_{n=1}^∞ (−1)^n B_{2n} 4^n (1 − 4^n) θ^(2n−1)/(2n)!  ⟺  tan(θ) ≈ θ + (1/3)θ^3

ICM 25 – 201
Landau Notation
“Big-O”
We use Landau’s notation to describe the order of terms or functions:
Big-O: If there is a constant C > 0 such that | f(x)| ≤ C · |g(x)| for
all x near x0, then we say f = O(g) [that’s “ f is ‘big-O’
of g”].²

Examples
1. For x near 0, we have sin(x) = O(x) and sin(x) = x + O(x^3).
2. If p(x) = 101x^7 − 123x^6 + x^5 − 15x^2 + 201x − 10, then
   • p(x) = O(x^7) as x → ∞.
   • p(x) = O(1) for x near 0.

3. As x → ∞, is x^n = O(e^x) for every n ∈ ℕ?

4. As x → ∞, is ln(x) = O(x^(1/n)) for every n ∈ ℕ?

ICM 26 – 201
² Link to further big-O info.
Exercises, I
Problems
Scientific Notation
1. Convert several constants at NIST to engineering notation.
Converting Bases
2. Convert to decimal: 101110, 101 × 2^10, 101.0111, 1110.0(01).
3. Convert to binary (to 8 places): 105, 1/7, 1234.4321, π, √2.
4. Express 831.22 in BCD form.
5. Write the BCD number 1001 0110 0011.1000 0101 in decimal.
6. Investigate converting bases by using synthetic division.
Floating Point Numbers
7. Convert 31.3875₁₀ to floating point format with base b = 10 and bias p = 49.
8. Convert from floating point format with base b = 2 and bias p = 127:
   1 | 126₁₀ | 5141₁₀
9. Why is the gap between successive values larger for bigger numbers when
   using a fixed number of digits?
10. Give an example showing that floating point arithmetic is not
    distributive (mult over add).
ICM 27 – 201
Exercises, II
Problems
IEEE-754 Standard
11. Write 20/7 in single precision format. In double precision.
12. Convert the single precision # 0 10000111 0010010...0 to decimal.
13. Chart double precision bit patterns.
14. Describe a simple way to test if a computation result is either infinite
    or NaN.
15. What is the purpose of using round to nearest, ties to even?
16. Explain the significance of the machine-epsilon value.
Error
17. The US Mint specifies that quarters weigh 5.670 g. What is the largest
    acceptable weight, if the relative error must be no more than 0.5%?
18. Find the relative error when adding 1 + 1/2 + · · · + 1/10^5 using
    5 digit arithmetic.
19. Show that cos(x) = O(1) for x near 0.
20. Let p be a polynomial with n = degree(p). Find k so that
    a. p(x) = O(x^k) as x → ∞.
    b. p(x) = O(x^k) for x ≈ 0.
ICM 28 – 201
II. Control Structures
Sections
1. Control Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

2. A Common Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3. Control Structures Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1. Excel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2. Maple [ Sage / Xcas ] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3. MATLAB [ FreeMat / Octave / Scilab ] . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4. C and Java . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5. TI-84 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6. R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
7. Python . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

4. From Code to Flow Charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59


Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

Reference Sheet Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63


ICM 29 – 201
II. Control Structures: Flow
Flow Control
Conditional Statements. A condition determines an action:
If the condition is true, then do an action.
If the condition is not true³, then do a different action.
E.g.,
• If a number is even, divide it by 2. Otherwise mult by 3 & add 1.
• If error is less than 10^−5, stop. If not, reapply Newton’s method.

Repeating Blocks / Loops: Repeat an action a specified number of


times (NB: Loops embed a conditional):
Count to a value doing an action each time.
E.g.,
• Add the first 20 prime numbers.
• Starting at t = 0; y0 = 0, use Euler’s method to find y(1) when y′ = t.

³ Is there a difference between not true and false? See, e.g., Intuitionistic
Logic at the Stanford Encyclopedia of Philosophy.
ICM 30 – 201
Types of Conditional Statements
Basic Conditional Types
Simple: IF condition THEN do action for true
IF condition THEN do action for true ELSE do action for false

Compound: IF condition1 THEN do action1


ELSE IF condition2 THEN do action2
ELSE IF condition3 THEN do action3
...
ELSE do actionn when all conditions are false

Example (NC 2011–2013 Tax Rate Schedule⁴)

IF your filing status is single; and taxable income is:

  more than:   but not over:   your tax is:
  $0           $12,750         6% OF THE NC TAXABLE INCOME AMOUNT ON FORM D-400
  $12,750      $60,000         $765 + 7% OF THE AMOUNT OVER $12,750
  $60,000      ...             $4,072.50 + 7.75% OF THE AMOUNT OVER $60,000
4 In 2014, NC converted to a regressive “flat tax” currently at 5.25% (2019).
Types of Loops

Loop Types
Counting Loops: For loops perform an action a pre-specified number of
times.

Condition Loops: While loops perform an action as long as a given


condition is true.

Examples
• For each employee, calculate their monthly pay.
• For each integer i less than n, compute the ith number in the
Fibonacci sequence.
• While the current remainder in the Euclidean algorithm is greater
than 1, calculate the next remainder.
• While the game isn’t over, process the user’s input.

ICM 32 – 201
Example: Collatz Flow Chart

‘Collatz’ Function
• Start with an integer greater than 1. If it’s even, divide it by 2.
  Otherwise, multiply it by 3 then add 1. Repeat until the value reaches 1,
  counting the number of steps.
• A program to calculate the number of steps requires a loop with a
  conditional inside.

[flow chart: Collatz(n): a := n; j := 1; while a > 1 do:
 if a is even then a := a ÷ 2 else a := 3a + 1; j := j + 1; end; Return(j)]

XKCD: Collatz Conjecture

ICM 33 – 201
Example: Collatz — A Loop and a Conditional
Pseudo-Code
We see a conditional loop in the Collatz function’s flow chart:

  while (the term > 1) do
    Calculate the next term
  end do

There is an ‘if’ statement inside the loop to calculate the new term:

  If (the term is even) then
    divide by 2
  else
    multiply by 3, then add 1
  end if

Putting these together gives:

  Get the first term
  Set the term counter to 1
  while (the term > 1) do
    If (the term is even) then divide by 2
    else multiply by 3, then add 1
    end if
    Increment the term counter
  end do
  Return the term counter
ICM 34 – 201
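The combined pseudo-code maps line-for-line onto a Python function (an illustrative sketch; the step count j follows the flow chart's convention of starting at 1):

```python
def collatz_steps(n):
    """Count the terms of the Collatz sequence from n down to 1."""
    a, j = n, 1
    while a > 1:            # the conditional loop
        if a % 2 == 0:      # the conditional inside the loop
            a //= 2
        else:
            a = 3 * a + 1
        j += 1
    return j

print(collatz_steps(6))  # 9
```

Starting from 6, the sequence 6, 3, 10, 5, 16, 8, 4, 2, 1 has 9 terms, so the function returns 9.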
Example: Euler’s Method as a Loop
Euler’s Method
The solution to a differential equation y′ = f(x, y) can be approximated using
the differential triangle. Calculate the next point (x_{k+1}, y_{k+1}) from
the current point (x_k, y_k) by following the tangent line for a step Δx = h.
Then the new point is
  (x_{k+1}, y_{k+1}) = (x_k + h, y_k + h · y′(x_k, y_k)).

[figure: the tangent line to y(x) at (x_k, y_k), with Δx = h and Δy = h · y′(x_k)]

Implement Euler’s method as a loop:

  Define the derivative function y′ = f(x, y)
  Get the initial point (x₀, y₀)
  Get the stepsize h
  Determine the number of steps n = (b − a)/h
  for i from 1 to n do
    Compute x_{k+1} = x_k + h and y_{k+1} = y_k + h · y′(x_k, y_k)
  end do
  Return the collection of points {(x_i, y_i)}, i = 0..n
ICM 35 – 201
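The loop above can be written as a short Python function (illustrative; the course used Maple and Python for such programs):

```python
def euler(f, x0, y0, h, n):
    """Euler's method: step along the tangent line n times from (x0, y0)."""
    points = [(x0, y0)]
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)   # y_{k+1} = y_k + h * y'(x_k, y_k)
        x = x + h             # x_{k+1} = x_k + h
        points.append((x, y))
    return points

# y' = t with y(0) = 0: ten steps of h = 0.1 give y(1) ~ 0.45 (true value 0.5)
pts = euler(lambda x, y: x, 0.0, 0.0, 0.1, 10)
```

Halving h shrinks the gap between 0.45 and the true value 0.5, illustrating the method's truncation error.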
Control Structures Example: Flowchart

A Common Sample Problem

1. List the squares of the first 5 integers showing which are even and
   which are odd.

• Use a for loop to step through the integers and an if-then conditional
  to test for even or odd.

[flow chart: Start → j = 1 → k = j^2 → if k mod 2 = 0 then print “k is even”
 else print “k is odd” → j = j + 1 → repeat while j ≤ 5 → Stop]
ICM 36 – 201
Control Structures Examples: Diagram

Start%

Loop% STOP%
START% UPDATE%
False!
j!=!1% j!=!j!+!1! Is!j%≤!5?!

Statements!
k!=!j2%
True%
False!
Is!
print!
Statements% k!mod!2!!
k!is!odd!
=!0?!
True%
print!
k!is!even!

End%
ICM 37 – 201
Excel Control Structures

Conditional Statements
If: = IF(condition, true action, false action)
[Can nest up to 7 IFs: =IF(condition1 , IF(condition2 , . . . , . . . ), . . . )]
[But I’ve nested 12 deep without problems...]

Case: = CHOOSE(index, case 1, case 2, . . . )


[Maximum of 29 cases (see also: LOOKUP)]

Note
Many versions of Excel include Visual Basic for Applications (VBA), a small
programming language for macros. VBA includes a standard if-then-
else/elseif-end if structure. (See the Excel Easy web tutorial.)

ICM 38 – 201
Excel Control Structures

Loop Structures
For: N/A (must be programmed in VBA)

While: N/A (must be programmed in VBA)

View:
• Excel (Mac) website • Excel (Win) website

ICM 39 – 201
Excel Control Structures Example

ICM 40 – 201
Maple Control Structures

Conditional Statements
If: if condition then statements;
else statements;
end if

if condition 1 then statements;


elif condition 2 then statements;
else statements;
end if

Case: N/A (use piecewise or if-elif-end if)

ICM 41 – 201
Maple Control Structures
Loop Structures
For: for index from start value (default 1) to end value
       by increment (default 1) do
statements;
end do
for index in expression sequence do
statements;
end do
While: while condition do
statements;
end do

View:
• Maple website • Maple Online Help

(See also: the Sage, Xcas, and TI Nspire, Maxima, or Mathematica’s website.)
ICM 42 – 201
Maple Control Structures Example

ICM 43 – 201
MATLAB1 Control Structures

Conditional Statements
If: if condition; statements; else statements; end
if condition; statements; elseif condition; statements;
else statements; end

Case: switch index (Scilab uses select)


case value1
statements;
case value2
statements;
..
.
otherwise
statements;
end

1 FreeMat, Octave, and Scilab are FOSS clones of MATLAB. Also see GDL and R.
ICM 44 – 201
MATLAB / Octave / Scilab Control Structures

Loop Structures
For: for index = startvalue:increment:endvalue
statements
end

While: while condition


statements
end

View:
• MATLAB website • Scilab website
• Octave website • FreeMat website

ICM 45 – 201
MATLAB Control Structures Example

octave-3.4.0:1>
> for j = 1:1:5;
> k = j*j;
> if mod(k,2)== 0;
> printf("%d is even\n", k);
> else
> printf("%d is odd\n", k);
> end; % of if
> end; % of for
1 is odd
4 is even
9 is odd
16 is even
25 is odd
octave-3.4.0:2>

ICM 46 – 201
C / Java Control Structures

Conditional Statements
If: if (condition) {statements}

if (condition) {statements}
else {statements}

Case: switch (index) {


case 1: statements ; break;
case 2: statements ; break;
..
.
case n: statements ; break;
default: statements }

(See also: Lua.)


ICM 47 – 201
C / Java Control Structures

Loop Structures
For: for (initialize; test; update ) {statements}

While: while (condition ) {statements} “entrance condition” loop

do {statements} while (condition ) “exit condition” loop

View:
• C reference card • Java reference card

ICM 48 – 201
C Control Structures Example

wmcb> gcc -o cs_eg cs_eg.c


#include <stdio.h>
wmcb> ./cs_eg
main() 1 is odd
{ int i, j; 4 is even
for (i=1; i<= 5; i++) 9 is odd
{ j = i*i; 16 is even
if ((j % 2)==0) 25 is odd
printf("%d is even\n", j);
else
printf("%d is odd\n", j);
}

return 0;
}

ICM 49 – 201
Java Control Structures Example

wmcb> javac cs_eg.java


class cs_eg {                            wmcb> java cs_eg
public static void main(String[] args)
{ 1 is odd
int i, j; 4 is even
9 is odd
for (i=1; i<= 5; i++)
{ j = i*i; 16 is even
if ((j % 2)==0) 25 is odd
System.out.println(j+" is even");
else
System.out.println(j+" is odd");
}

}
}

ICM 50 – 201
TI-84 Control Structures

Conditional Statements
If: If condition: statement

If condition
Then
statements
Else
statements
End

Case: N/A (use a piecewise function or nested if statements)

ICM 51 – 201
TI-84 Control Structures

Loop Structures
For: For(index, start value, end value [, increment])
statements
End

While: While* condition          Repeat** condition
         statements                statements
       End                       End
* Loop while the condition is true; test condition at the beginning
** Loop until the condition is true; test condition at the end

View:
• TI Calculator website • TI-84 Guidebook links

ICM 52 – 201
TI-84 Control Structures Example

PRGM I NEW I 1:Create


New
PROGRAM
ODD
Name= CONTROL
9
:ClrHome EVEN
:For(J,1,5) 16
:J^2 → K 16
:If gcd(K,2)= 2 25
:Then Done
:Disp "EVEN", K
:Else TI-84+ SE Screen Capture
:Disp "ODD", K
:End
:End

ICM 53 – 201
R Control Structures

Conditional Statements
If: if(condition) {statements}

if(condition)
{statements}
else
{statements}

Case: switch (index, list)

ICM 54 – 201
R Control Structures

Loop Structures
For: for (variable in sequence)
{statements}

While: while* (condition)        repeat**
         {statements}              {statements
                                    if (exit condition) break
                                    statements}
* Loop while the condition is true; test condition at the beginning
** Loop until the condition is true; test condition inside the loop

View:
• The R Project for Statistical • The Comprehensive R Archive
Computing homepage Network — CRAN
ICM 55 – 201
R Control Structures Example

> for (j in 1:5){
+   k = j^2
+   if (k %% 2 == 0) {
+     cat(k, "is even\n")}
+   else {
+     cat(k, "is odd\n")}
+ }
1 is odd
4 is even
9 is odd
16 is even
25 is odd
>

ICM 56 – 201
Python Control Structures

Conditional Statements
if: if condition:
        statements

    if condition:
        statements
    elif condition:
        statements
    else:
        statements

case: N/A (use an if/elif chain or a dictionary dispatch)

View the Python website.

ICM 57 – 201
Python Control Structures Example

>>> for j in range(1, 6):
        k = j*j
        if (k%2) == 0:
            print(k, 'is even')
        else:
            print(k, 'is odd')

1 is odd
4 is even
9 is odd
16 is even
25 is odd
>>>

Note: indentation is critical in python.

ICM 58 – 201
From Code to a Flow Chart

Maple Loops
Build flow charts for the Maple code shown below:

▸ Algorithm 1.
n := 12;
r := 1;
for i from 2 to n do
  r := r * i;
end do:
r;

▸ Algorithm 2.
n := 12;
R := n;
j := n - 1;
while j > 1 do
  R := R * j;
  j := j - 1;
end do:
R;

What mathematical function are these routines calculating?
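The two routines above can be transcribed almost line-for-line into Python; a sketch (variable names follow the Maple code) that runs both loops and prints their common result:

```python
# Algorithm 1: build the product upward with a for loop.
n = 12
r = 1
for i in range(2, n + 1):
    r = r * i

# Algorithm 2: build the product downward with a while loop.
R = n
j = n - 1
while j > 1:
    R = R * j
    j = j - 1

print(r, R)  # both loops produce the same value
```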

ICM 59 – 201
Exercises, I
Problems
1. Write an If-Then-ElseIf statement that calculates tax for a married
couple filing jointly using the 2011 NC Tax Table (before the “Flat Tax”).
a. In natural language d. In Matlab (Octave or Scilab)
b. In Excel e. In C (Java, python, or R)
c. In Maple f. On a TI-84

2. Implement the Collatz Flow Chart


a. In pseudo-code b. In Maple (as a function)

3. Write code that, given a positive integer n, prints the first n primes.
4. Give a Maple version of Euler’s method.
5. Write nested for loops that fill in the entries of an n × n Hilbert matrix
a. In Maple b. On a TI-84

6. How can a while loop be redesigned as a for loop?


7. How can a for loop be redesigned as a while loop?
ICM 60 – 201
Exercises, II

Problems
8. Make a flow chart for implementing the Euclidean algorithm to find
the GCD of two positive integers p and q.
9. Write code using the Euclidean algorithm to find the GCD of two
positive integers p and q.
10. Write a Maple or Matlab function that applies the Extended
Euclidean algorithm to two positive integers p and q to give the
greatest common divisor gcd(p, q) and to find integers a and b such
that a p + b q = gcd(p, q).
11. a. Make a flow chart for the Maple code shown in Flow Chart Problem
worksheet.
b. What does the code do?
c. Convert the Maple statements to
i. Matlab
ii. TI-84+

ICM 61 – 201
Exercises, III
Problems
12. The “9’s-complement” of a number x is the value that must be added to x to make every digit a 9. E.g., the 9’s-complement of 3 is 6; of 64 is 35; etc.
  a. Write a statement to calculate the 9’s-complement of an n-digit number y; call the result y₉.
  b. Write an if-then statement that performs carry-around: if the sum of two n-digit numbers has an (n+1)st carry digit, drop that digit and add 1 to the sum.
  c. Let r > s be two n-digit integers. Find s₉ with a. Now perform carry-around on (r + s₉) with b.
  d. What simple arithmetic operation is equivalent to the result of c?

13. The compass heading CH for going from P1 = (lat1, lon1) to P2 = (lat2, lon2) (other than the North or South poles) is given by

      CH(P1, P2) = L         if sin(lon2 − lon1) < 0
                   2π − L    otherwise

    where
      L = cos⁻¹( [sin(lat2) − sin(lat1) cos(d)] / [sin(d) cos(lat1)] )
    and
      d = cos⁻¹( sin(lat1) sin(lat2) + cos(lat1) cos(lat2) cos(lon1 − lon2) ).

ICM 62 – 201
Quick Reference Cards

Quick Reference Card Collection


• Maple 15 (Mac) • Excel 2011 (Mac)

• Maple 15 (Win) • Excel 2010 (Win)

• Maplesoft’s Online help • Microsoft’s Online help

• MATLAB • Wikiversity’s
Control Structures
• Scilab
• C
• Octave • Java

• TI-84 +: Algebra; • R
Trigonometry • Python 2.7, Python 3.2

ICM 63 – 201
S I. Special Topics: Computation Cost and Horner’s Form

Sections
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2. Horner’s Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

ICM 64 – 201
Special Topic: Computation Cost & Horner’s Form

Introduction to Cost
The arithmetic cost of computation is a measure of how much
‘mathematical work’ a particular expression takes to compute. We will
measure an expression in terms of the number of arithmetic operations it
requires. For example, we’ll measure the cost of computing the
expression
sin(2x4 + 3x + 1)
as
2 additions + 5 multiplications + 1 function call
for a total of 7 arithmetic operations plus a function call.

At a lower level, the time cost of a cpu instruction is the number of clock
cycles taken to execute the instruction. Current CPUs⁵ are measured in
FLoating-point OPerations per Second or FLOPS. For example, the
eight-core Intel® Core™ i9 processor used in an iMac (i9/2019) can
achieve over 235 gigaFLOPS ≈ 2.35 × 10¹¹ floating-point operations per second.

ICM 65 – 201
5 Current in early 2020, that is. See Moore’s Law.
Horner’s Form
Partial Factoring
William Horner studied solving algebraic equations and efficient forms for
computation. Horner observed that partial factoring simplified a polynomial
calculation. Consider:
Standard Form ⟺ Horner’s Form

  1 + 2x                          =  1 + 2x
    (1 add + 1 mult)                 (1 add + 1 mult)
  1 + 2x + 3x²                    =  1 + x·(2 + 3x)
    (2 add + 3 mult)                 (2 add + 2 mult)
  1 + 2x + 3x² + 4x³              =  1 + x·(2 + x·[3 + 4x])
    (3 add + 6 mult)                 (3 add + 3 mult)
  1 + 2x + 3x² + 4x³ + 5x⁴        =  1 + x·(2 + x·[3 + x·(4 + 5x)])
    (4 add + 10 mult)                (4 add + 4 mult)
  1 + 2x + 3x² + 4x³ + 5x⁴ + 6x⁵  =  1 + x·(2 + x·[3 + x·(4 + x·[5 + 6x])])
    (5 add + 15 mult)                (5 add + 5 mult)

What are the patterns?
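One way to see the cost pattern concretely is to count operations while evaluating the same polynomial both ways; a Python sketch (function names are mine, not from the text):

```python
def standard_eval(coeffs, x):
    """Standard form: compute each power x^k by repeated multiplication."""
    total, adds, mults = coeffs[0], 0, 0
    for k in range(1, len(coeffs)):
        term = coeffs[k]
        for _ in range(k):          # k multiplications for c_k * x^k
            term *= x
            mults += 1
        total += term
        adds += 1
    return total, adds, mults

def horner_eval(coeffs, x):
    """Horner's form: one multiply and one add per coefficient."""
    total, adds, mults = coeffs[-1], 0, 0
    for c in reversed(coeffs[:-1]):
        total = total * x + c
        mults += 1
        adds += 1
    return total, adds, mults

coeffs = [1, 2, 3, 4, 5, 6]         # 1 + 2x + 3x^2 + ... + 6x^5
v1, a1, m1 = standard_eval(coeffs, 2.0)
v2, a2, m2 = horner_eval(coeffs, 2.0)
print(v1 == v2, (a1, m1), (a2, m2))  # True (5, 15) (5, 5)
```

For the degree-5 polynomial the counts match the last row above: 5 adds + 15 mults in standard form versus 5 adds + 5 mults in Horner’s form.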


ICM 66 – 201
Patterns

Two Patterns
If p(x) is an nth degree polynomial, the cost of computation in standard
form is O(n²). Using Horner’s form reduces the cost to O(n).

Example: Let p(x) = a₀ + a₁x + a₂x² + a₃x³ + a₄x⁴ + a₅x⁵ + a₆x⁶.

Horner’s form: p(x) = a₀ + x (a₁ + x [a₂ + x (a₃ + x [a₄ + x (a₅ + a₆x)])]).


This factored form significantly reduces the work needed
to evaluate p at a given value of x.

Modified: ?(x) = a1 + x [2 a2 + x (3 a3 + x [4 a4 + x (5 a5 + 6 a6 x)])].


• What does this modification calculate in terms of p?
• What is the cost of this modification versus using its
standard form?

ICM 67 – 201
Further Reductions
Chebyshev’s Polynomials
Pafnuty L. Chebyshev worked in number theory, approximation theory,
and statistics. The special polynomials named for him are the Chebyshev
Polynomials T_n(x) that have many interesting properties. For example, T_n
is even or odd with n, oscillates between −1 and 1 on the interval [−1, 1],
and also has all its zeros in [−1, 1]. The Horner form of T_n is quite
interesting. Let u = x², then:

  −3x + 4x³                          ⟺  x(−3 + 4x²) = x(−3 + 4u)
  1 − 8x² + 8x⁴                      ⟺  1 + u(−8 + 8u)
  5x − 20x³ + 16x⁵                   ⟺  x(5 + u[−20 + 16u])
  −1 + 18x² − 48x⁴ + 32x⁶            ⟺  −1 + u(18 + u[−48 + 32u])
  −7x + 56x³ − 112x⁵ + 64x⁷          ⟺  ?
  1 − 32x² + 160x⁴ − 256x⁶ + 128x⁸   ⟺  ?
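The u = x² reduction is easy to check numerically; a Python sketch comparing the standard and condensed forms of T₄ and T₆ against the defining identity T_n(cos t) = cos(nt) (the test point 0.3 is my choice):

```python
import math

x = 0.3
u = x * x

t4_std    = 1 - 8*x**2 + 8*x**4
t4_horner = 1 + u*(-8 + 8*u)

t6_std    = -1 + 18*x**2 - 48*x**4 + 32*x**6
t6_horner = -1 + u*(18 + u*(-48 + 32*u))

# Cross-check against T_n(cos t) = cos(n t).
t = math.acos(x)
print(abs(t4_horner - math.cos(4*t)) < 1e-12,
      abs(t6_horner - math.cos(6*t)) < 1e-12)  # True True
```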

ICM 68 – 201
Exercises, I
Problems
1. Make a flow chart for evaluating a polynomial using Horner’s form.
2. Write Maple or Matlab code implementing Horner’s form.
3. How does synthetic division relate to Horner’s form?
4. Write a Maple or Matlab function that performs synthetic division
with a given polynomial at a given value.
5. Calculate the number of additions and multiplications required for
evaluating an nth degree polynomial
a. in standard form. b. in Horner’s form.
c. Look up the sequence {0, 2, 5, 9, 14, 20, 27, 35, 44, . . . } at The On-Line
Encyclopedia of Integer Sequences.

6. Prove that Horner’s form reduces cost from O(n2 ) to O(n).


7. Analyze the reduction of cost when using Horner’s form to evaluate
Chebshev polynomials.
ICM 69 – 201
III. Numerical Differentiation

Sections
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
2. Taylor’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3. Difference Methods . . . . . . . . . . . . . . . . . . . . . . . . . . 75
   i. Forward Differences . . . . . . . . . . . . . . . . . . . . . . . . 75
   ii. Backward Differences . . . . . . . . . . . . . . . . . . . . . . . 76
   iii. Centered Differences . . . . . . . . . . . . . . . . . . . . . . . 77

Appendix I: Taylor’s Theorem . . . . . . . . . . . . . . . . . . . . . . 80

Appendix II: Centered Difference Coefficients Chart . . . . . . . . . . 81

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

ICM 70 – 201
III. Numerical Differentiation

What is Numerical Differentiation?

Numerical differentiation is the approximation of the derivative of a
function at a point using numerical formulas, not the algebraic rules for
differentiation. The basic form uses the slope of a short chord rather than
the tangent line. Since we are subtracting numbers that are close together,
loss of computational precision can be a serious problem.

[Figure: the graph of f(x) with tangent slope m_T = f′(a) at P = (a, f(a)) and chord slope m_ch ≈ f′(a).]

Taylor series expansions will be our basic tool for developing formulas and
error bounds for numerical derivatives. The errors will have two main
components: truncation errors from Taylor polynomials and round-off
errors from finite-precision floating-point arithmetic.

ICM 71 – 201
Taylor’s Theorem

Definition (Taylor Polynomials (1712/1715⁶))

If f has sufficiently many derivatives at x = a, the Taylor polynomial of
degree n (or order n) is

  p_n(x) = Σ_{k=0}^{n} [f^(k)(a)/k!] (x − a)^k

where f^(0)(a) = f(a).

Theorem (Taylor’s Theorem)

Suppose f has n + 1 derivatives on a neighborhood of a. Then
f(x) = p_n(x) + R_n(x) where

  R_n(x) = [f^(n+1)(c)/(n + 1)!] (x − a)^(n+1)

for some c between x and a.
ICM 72 – 201
⁶ Actually, discovered by Gregory in 1671, ≈ 14 years before Taylor was born!
Proving Taylor’s Theorem

Proof. (Taylor’s Theorem — Outline).

1. The FToC ⇒ f(x) = f(a) + ∫₀^(x−a) f′(x − t) dt.

2. Integrate by parts with u = f′(x − t) and dv = dt:

     f(x) = f(a) + f′(a)(x − a) + ∫₀^(x−a) f″(x − t) · t dt

3. Repeat the process: choose u = f^(k)(x − t) and dv = t^(k−1)/(k − 1)! dt to arrive at

     f(x) = f(a) + f′(a)(x − a) + [f″(a)/2!](x − a)² + ··· + [f^(n)(a)/n!](x − a)^n + R_n(a, x)

   where

     R_n(a, x) = (1/n!) ∫₀^(x−a) f^(n+1)(x − t) · t^n dt

ICM 73 – 201
Tailored Expressions

Forms of the Remainder

  Lagrange (1797):  R_n(x) = [f^(n+1)(c)/(n+1)!] (x − a)^(n+1)
  Cauchy (1821):    R_n(x) = [f^(n+1)(c)/n!] (x − c)^n (x − a)
  Integral Form:    R_n(x) = (1/n!) ∫ₐ^x f^(n+1)(t) (x − t)^n dt
  Uniform Form:     |R_n(x)| ≤ [|x − a|^(n+1)/(n+1)!] · max |f^(n+1)(x)| = O(|x − a|^(n+1))

Two Useful Taylor Expansions

Set x = a + h in the Taylor polynomial. Then

  f(a + h) = f(a) + f′(a)·h + (1/2!) f″(a)·h² + (1/3!) f‴(a)·h³ + ···    (1)

And now set x = a − h. Then

  f(a − h) = f(a) − f′(a)·h + (1/2!) f″(a)·h² − (1/3!) f‴(a)·h³ ± ···    (2)
ICM 74 – 201
Forward Difference Approximation

Forward Difference
Subtract f(a) from both sides of Eq (1), then divide by h to obtain:

  [f(a + h) − f(a)]/h = f′(a) + O(h²)/h

The Forward Difference Formula is

  f′(a) = [f(a + h) − f(a)]/h + O(h)        (FD)

Examples
1. Suppose f(x) = 1 + x e^sin(x). For a = 0 and h = 0.1, we have
     f′(0) ≈ (1.1105 − 1.0000)/0.1 = 1.105
2. Suppose P0 = (1.000, 3.320) and P1 = (1.100, 3.682). Then
     f′(1.000) ≈ (3.682 − 3.320)/(1.100 − 1.000) = 3.620
ICM 75 – 201
Backward Difference Approximation

Backward Difference
Subtract f(a) from both sides of Eq (2), then divide by −h to obtain:

  [f(a) − f(a − h)]/h = f′(a) + O(h²)/h

The Backward Difference Formula is

  f′(a) = [f(a) − f(a − h)]/h + O(h)        (BD)

Examples
1. Again, suppose f(x) = 1 + x e^sin(x). For a = 0 and h = 0.1, we have
     f′(0) ≈ (1.0000 − 0.910)/0.1 = 0.900
2. Suppose P0 = (1.000, 3.320) and P1 = (0.900, 2.970). Then
     f′(1.000) ≈ (3.320 − 2.970)/(1.000 − 0.900) = 3.500
ICM 76 – 201
Centered Difference Approximation

Centered Difference
Subtract O(h³) versions of Eqs (1) and (2):

  f(a + h) = f(a) + f′(a)·h + (1/2!) f″(a)·h² + O(h³)
  f(a − h) = f(a) − f′(a)·h + (1/2!) f″(a)·h² + O(h³)
  ⟹ f(a + h) − f(a − h) = 2 f′(a)·h + O(h³)

Solve for f′(a) to obtain:

The Centered Difference Formula is

  f′(a) = [f(a + h) − f(a − h)]/(2h) + O(h²)        (CD)

Example
1. Once more, suppose f(x) = 1 + x e^sin(x). For a = 0 and h = 0.1, we have
     f′(0) ≈ (1.110 − 0.910)/0.2 = 1.000
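The three formulas (FD), (BD), and (CD) can be sketched in a few lines of Python; the function and point are the running example f(x) = 1 + x e^sin(x) at a = 0 with h = 0.1 (function names are mine):

```python
import math

def f(x):
    return 1 + x * math.exp(math.sin(x))

def forward(f, a, h):
    return (f(a + h) - f(a)) / h          # (FD)

def backward(f, a, h):
    return (f(a) - f(a - h)) / h          # (BD)

def centered(f, a, h):
    return (f(a + h) - f(a - h)) / (2*h)  # (CD)

a, h = 0.0, 0.1
print(forward(f, a, h), backward(f, a, h), centered(f, a, h))
# about 1.105, 0.905, and 1.005; the exact value is f'(0) = 1
```

Note how the centered value is already an order of magnitude closer to f′(0) than the one-sided values, as the O(h²) error predicts.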
ICM 77 – 201
The Chart
A Table of Differences From a Function
Let f(x) = 1 + x e^sin(x) and a = 1.0. Then

  f′(1) = e^sin(1) (1 + cos(1)) ≈ 3.573157593

  h      FD(h)      error      BD(h)      error      CD(h)      error
  1/2¹   3.494890   0.078268   3.024408   0.548750   3.259649   0.313509
  1/2²   3.636316   0.063158   3.347764   0.225394   3.492040   0.081118
  1/2³   3.628464   0.055306   3.476944   0.096214   3.552704   0.020454
  1/2⁴   3.606368   0.033210   3.529696   0.043462   3.568032   0.005126
  1/2⁵   3.591104   0.017946   3.552640   0.020518   3.571872   0.001286
  1/2⁶   3.582464   0.009306   3.563200   0.009958   3.572832   0.000326
  1/2⁷   3.577600   0.004442   3.568256   0.004902   3.572928   0.000230
  1/2⁸   3.575296   0.002138   3.570688   0.002470   3.572992   0.000166

ICM 78 – 201
Another Chart
A Table of Differences From Data
Estimate the derivatives of a function given the data below (h = 0.4).

  x_i  −2.00  −1.60  −1.20  −0.80  −0.40   0.00   0.40   0.80   1.20   1.60   2.00
  y_i  −1.95  −0.29   0.56   0.81   0.65   0.30  −0.06  −0.21   0.04   0.89   2.55

Forward Differences
  x_i  −2.00  −1.60  −1.20  −0.80  −0.40   0.00   0.40   0.80   1.20   1.60   2.00
  y′_i  4.16   2.12  0.625 −0.375  −0.90  −0.90 −0.375  0.625   2.12   4.16    —

Backward Differences
  x_i  −2.00  −1.60  −1.20  −0.80  −0.40   0.00   0.40   0.80   1.20   1.60   2.00
  y′_i   —    4.16   2.12  0.625 −0.375  −0.90  −0.90 −0.375  0.625   2.12   4.16

Centered Differences
  x_i  −2.00  −1.60  −1.20  −0.80  −0.40   0.00   0.40   0.80   1.20   1.60   2.00
  y′_i   —   3.138  1.374  0.125 −0.6375 −0.90 −0.6375  0.125  1.374  3.138    —

Actual Derivatives (from the function’s formula — a cubic polynomial)
  x_i  −2.00  −1.60  −1.20  −0.80  −0.40   0.00   0.40   0.80   1.20   1.60   2.00
  y′_i 5.325  3.057  1.293  0.033 −0.723 −0.975 −0.723  0.033  1.293  3.057  5.325

ICM 79 – 201
Appendix I: Taylor’s Theorem
Methodus Incrementorum Directa et Inversa (1715)1
In 1712, Taylor wrote a letter containing his theorem without proof to
Machin. The theorem appears with proof in Methodus Incrementorum as
Corollary II to Proposition VII. The proposition is a restatement of
“Newton’s [interpolation] Formula.” Maclaurin introduced the method
(undetermined coefficients; order of contact) we use now to present Taylor’s theorem in
elementary calculus classes in A Treatise of Fluxions (1742) §751.

Corollary (Maclaurin’s Corollary II (pg 23))

If for the evanescent increments, the fluxions that are proportional to
them are written, the quantities v, v′, v″, &c. being now made all
equal to the time z uniformly flows to become z + v, then x will become

  x + ẋ v/(1 ż) + ẍ v²/(1·2 ż²) + ẍ˙ v³/(1·2·3 ż³) + &c.

• Investigate the connection between synthetic division and Taylor expansions.


ICM 80 – 201
1 See Ian Bruce’s annotated translation.
Appendix II: Centered Difference Coefficients Chart

Centered Finite Difference Formula Coefficients¹

First derivative:
  O(h²):                               −1/2      0      1/2
  O(h⁴):                      1/12     −2/3      0      2/3     −1/12
  O(h⁶):             −1/60    3/20     −3/4      0      3/4     −3/20    1/60
  O(h⁸):    1/280   −4/105     1/5     −4/5      0      4/5      −1/5    4/105   −1/280

Second derivative:
  O(h²):                                  1     −2        1
  O(h⁴):                     −1/12      4/3    −5/2     4/3     −1/12
  O(h⁶):              1/90   −3/20      3/2   −49/18    3/2     −3/20    1/90
  O(h⁸):   −1/560    8/315    −1/5      8/5  −205/72    8/5      −1/5    8/315   −1/560

Third derivative:
  O(h²):                      −1/2        1      0       −1      1/2
  O(h⁴):               1/8      −1     13/8      0    −13/8       1     −1/8
  O(h⁶):   −7/240     3/10  −169/120  61/30     0   −61/30   169/120   −3/10    7/240

Fourth derivative:
  O(h²):                         1       −4      6       −4        1
  O(h⁴):              −1/6       2    −13/2   28/3    −13/2       2     −1/6
  O(h⁶):    7/240     −2/5   169/60 −122/15   91/8  −122/15   169/60    −2/5    7/240

(Each row lists the coefficients at the equally spaced nodes x − kh, ..., x − h, x, x + h, ..., x + kh.)

E.g., the third derivative’s centered difference approximation with second-order accuracy is

  f⁽³⁾(x₀) ≈ [ −½ f(x₀ − 2h) + 1 f(x₀ − h) + 0 f(x₀) − 1 f(x₀ + h) + ½ f(x₀ + 2h) ] / h³ + O(h²).

¹ See Fornberg, “Generation of Finite Difference Formulas on Arbitrarily Spaced Grids,” Math of Comp 51 (184), pp 699–706.

ICM 81 – 201
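A row of the chart turns directly into code; a Python sketch (function name mine) using the O(h⁴) first-derivative coefficients 1/12, −2/3, 0, 2/3, −1/12:

```python
import math

def d1_order4(f, x, h):
    # f'(x) ~ [f(x-2h)/12 - 2f(x-h)/3 + 2f(x+h)/3 - f(x+2h)/12] / h
    return (f(x - 2*h)/12 - 2*f(x - h)/3
            + 2*f(x + h)/3 - f(x + 2*h)/12) / h

approx = d1_order4(math.sin, 1.0, 0.1)
print(approx, math.cos(1.0))  # the two agree to about five decimal places
```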
Exercises, I

Problems
1. Show that the centered difference formula is the average of the forward
   and backward difference formulas.
2. Explain why the centered difference formula is O(h²) rather than O(h).
3. Add O(h⁴) versions of Eqs (1) and (2) to find a centered difference
   approximation to f″(a).
4. Investigate the ratio of error in the function’s difference chart as h is
   successively divided by 2 for
   a. forward differences
   b. backward differences
   c. centered differences
5. Examine the ratios of error to h in the data difference chart for
   a. forward differences
   b. backward differences
   c. centered differences

ICM 82 – 201
Exercises, II
Problems
6. Derive the 5-point difference formula for f′(a) by combining Taylor
   expansions to O(h⁵) for f(a ± h) and f(a ± 2h).
7. Write a Maple or Matlab function that uses the backward difference
   formula (BD) in Euler’s method of solving differential equations.
8. Collect the temperatures (with a CBL) in a classroom from 8:00 am
   to 6:00 pm.
   a. Estimate the rate of change of temperatures during the day.
   b. Compare plots of the rates given by forward, backward, and centered
      differences.
9. a. Find Taylor expansions for sin and cos to O(x⁶). Estimate cos(1.0).
   b. Since d/dx sin(x) = cos(x), we can estimate cos with the derivative of
      sin. Use your expansion of sin and h = 0.05 to approximate cos(1.0) with
      i. forward differences   ii. backward differences   iii. centered differences
   Discuss the errors.
ICM 83 – 201
IV. Root Finding Algorithms

Sections
1. The Bisection Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85

2. Newton-Raphson Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3. Secant Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4. Regula Falsi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

5. Halley’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101


Appendix III: Rate of Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Links and Others . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

ICM 84 – 201
IV. Root Finding Algorithms: The Bisection Method

The Bisection Method

If a continuous function f has a root r in an interval, then r is in either
the interval’s left half or right half. Suppose r is in the right half
interval. Then r must be in either this smaller interval’s left half or
right half. Find which half and continue the procedure.

[Figure: the graph of f(x) crossing the x-axis at the root r inside [a, b].]

This process depends on the Intermediate Value Theorem.

Theorem (Bolzano’s Intermediate Value Theorem (1817))

Let f be continuous on [a, b]. Suppose that y* is between f(a) and f(b).
Then there is a point c ∈ (a, b) such that f(c) = y*.
In particular, if f(a) · f(b) < 0, then f has a root r with a < r < b.

ICM 85 – 201
The Bisection Error

Theorem (Bisection Algorithm)
Let [a, b] be an interval on which f changes sign. Define

  x_n = c_n = ½(a_{n−1} + b_{n−1})

with [a_n, b_n] chosen by the algorithm. Then f has a root α ∈ [a, b], and

  |α − x_n| ≤ (b − a) · 2⁻ⁿ

For an error tolerance ε > 0, set

  n = ⌈( log(b − a) − log(ε) ) / log(2)⌉

to obtain |α − x_n| ≤ ε. (This is called linear convergence.)

Theorem (Cauchy’s Bound for Real Roots⁷ (1829))

Suppose that r is a root of p(x) = x^n + a_{n−1} x^{n−1} + ··· + a₀. Let
M = max_{k=0..n−1} |a_k|. Then |r| ≤ M + 1.

ICM 86 – 201
⁷ A-L Cauchy, Exercices de mathématiques, De Bure frères, Paris (1829).
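Cauchy’s bound is a one-liner to apply; a Python sketch (function name mine) for a monic polynomial given its lower-order coefficients:

```python
def cauchy_bound(coeffs):
    """Cauchy's bound for a monic p(x) = x^n + a_{n-1}x^{n-1} + ... + a_0.

    coeffs lists [a_0, a_1, ..., a_{n-1}]; every root r has |r| <= M + 1.
    """
    return 1 + max(abs(a) for a in coeffs)

# For p(x) = x^3 - 2x - 5: M = 5, so all real roots lie in [-6, 6],
# giving a starting interval for bisection.
print(cauchy_bound([-5, -2, 0]))  # 6
```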
The Bisection Method Algorithm

Algorithm (Bisection Method (Basic Outline))


Given f and [a, b].
1. Set k = 0 and [a0 , b0 ] = [a, b].

2. Calculate c = ½(a_k + b_k)

3. if f (c) ⇡ 0, then c is a root; quit

4. if f (c) · f (ak ) < 0, then set [ak+1 , bk+1 ] = [ak , c]

5. else if f (c) · f (bk ) < 0, then set [ak+1 , bk+1 ] = [c, bk ]

6. Set k = k + 1 and (if k isn’t too big) go to 2.

ICM 87 – 201
The Bisection Method Pseudocode

input : a, b, eps
extern: f(x)
1: fa ← f(a);
2: fb ← f(b);
3: if fa·fb > 0 then
4:   stop ;  /* Better: sign(fa) ≠ sign(fb) */
5: n ← ceiling((log(b − a) − log(eps))/log(2));
6: for i ← 1 to n do
7:   c ← a + 0.5·(b − a);
8:   fc ← f(c);
9:   if abs(fc) < eps then
       return: c
10:  if fa·fc < 0 then
11:    b ← c;
12:    fb ← fc;
13:  else
14:    if fa·fc > 0 then
15:      a ← c;
16:      fa ← fc;
17:    else
         return: c

return: c
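The pseudocode above translates directly into Python; a sketch (names mine), tried on the classic test polynomial x³ − 2x − 5:

```python
import math

def bisection(f, a, b, eps=1e-10):
    fa, fb = f(a), f(b)
    if fa * fb > 0:                       # better: compare signs
        raise ValueError("f must change sign on [a, b]")
    n = math.ceil((math.log(b - a) - math.log(eps)) / math.log(2))
    c = a
    for _ in range(n):
        c = a + 0.5 * (b - a)             # midpoint
        fc = f(c)
        if abs(fc) < eps:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

root = bisection(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
print(root)  # about 2.09455148
```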

ICM 88 – 201
Newton-Raphson Method

Newton-Raphson Method⁸
If a function f is ‘nice’, use the tangent line to approximate f. The
root of the tangent line — easy to find — approximates the root of f.

1. f(x) ≈ f(a) + f′(a)(x − a)
2. Set f(x) = 0; solve for x:

     x = a − f(a)/f′(a)

3. Set N(x) = x − f(x)/f′(x). Then

     x_{n+1} = N(x_n)

[Figures: tangent-line iterations with x₀ = 2.6, x₁ = 3.56, x₂ = 1.98, x₃ = 3.267, ... closing in on the root ≈ 3.199645.]

⁸ The general method we use was actually developed by Simpson; Newton worked with polynomials; Raphson iterated the formula to improve the estimate of the root.
ICM 89 – 201
Newton’s Method Error

Theorem
Let f ∈ C²(I) on some interval I ⊂ ℝ. Suppose α ∈ I is a root of f.
Choose x₀ ∈ I and define

  x_{n+1} = x_n − f(x_n)/f′(x_n).

Then

  |α − x_{n+1}| ≤ M · |α − x_n|²

or, with e_n = |α − x_n|,

  e_{n+1} ≤ M · e_n²

where M is an upper bound for ½ |f″(x)/f′(x)| on I.

This is called quadratic or “order 2” convergence.

ICM 90 – 201
Newton’s Method Algorithm

Algorithm (Newton’s Method (Basic Outline))

Given f and x₀.
1. Set k = 1 and N(x) = x − f(x)/f′(x)
2. Compute x_k = N(x_{k−1}).
3. If f(x_k) ≈ 0, then x_k is a root; quit
4. else if |f(x_k)| or |x_k − x_{k−1}| is very small, then x_k ≈ a root; quit
5. Set k = k + 1 and (if k isn’t too big) go to 2.

ICM 91 – 201
Newton’s Method Pseudocode

input : x0, eps, n
extern: f(x), df(x) = f′(x)

1: fx0 ← f(x0);
2: dfx0 ← df(x0);
3: for i ← 1 to n do
4:   x1 ← x0 − fx0/dfx0;
5:   fx1 ← f(x1);
6:   if |fx1| + |x1 − x0| < eps then
7:     return: x1
8:   x0 ← x1;
9:   fx0 ← fx1;
10:  dfx0 ← df(x0);
return: x1
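The same pseudocode in Python; a sketch (names mine) on x³ − 2x − 5 with x₀ = 2:

```python
def newton(f, df, x0, eps=1e-12, n=50):
    """Newton's method following the pseudocode above."""
    x1 = x0
    for _ in range(n):
        x1 = x0 - f(x0) / df(x0)
        if abs(f(x1)) + abs(x1 - x0) < eps:
            return x1
        x0 = x1
    return x1

root = newton(lambda x: x**3 - 2*x - 5,   # f
              lambda x: 3*x**2 - 2,       # f'
              2.0)
print(root)  # about 2.09455148
```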

ICM 92 – 201
Secant Method

Secant Method
Newton’s method requires evaluating the derivative — this can be from
difficult to impossible in practice. Approximate the derivative in Newton’s
method with a secant line⁹:

  x_{n+1} = x_n − f(x_n) / [ (f(x_n) − f(x_{n−1})) / (x_n − x_{n−1}) ]
          = x_n − f(x_n) · (x_n − x_{n−1}) / (f(x_n) − f(x_{n−1}))

[Figures: Newton’s method following tangent lines versus the secant method following chords determined by x₀, x₁, x₂, ...]

⁹ Historically, the methods developed the opposite way: Viète used discrete steps of 10⁻ᵏ (1600); Newton used secants (1669), then ‘truncated power series’ (1687); Simpson used fluxions/derivatives (1740) with ‘general functions.’
ICM 93 – 201
Secant Method Error

Theorem
Let f ∈ C²(I) for some interval I ⊂ ℝ. Suppose α ∈ I is a root of f.
Choose x₀ and x₁ ∈ I, and define

  x_{n+1} = x_n − f(x_n) · (x_n − x_{n−1}) / (f(x_n) − f(x_{n−1})).

Then

  |α − x_{n+1}| ≤ M · |α − x_n| · |α − x_{n−1}|

or, with e_n = |α − x_n|,

  e_{n+1} ≤ M · e_n · e_{n−1}

where M is an upper bound for ½ |f″(x)/f′(x)| on I.

This is superlinear convergence of “order 1.6”. (Actually, it’s order (1 + √5)/2.)

ICM 94 – 201
Secant Method Algorithm

Algorithm (Secant Method (Basic Outline))

Given f, x₀, and x₁:
1. Set k = 2 and S(x₀, x₁) = x₁ − f(x₁) · (x₁ − x₀) / (f(x₁) − f(x₀))
2. Compute x_k = S(x_{k−1}, x_{k−2}).
3. If f(x_k) ≈ 0, then x_k is a root; quit
4. else if |x_k − x_{k−1}| is very small, then x_k ≈ a root; quit
5. Set k = k + 1 and (if k isn’t too big) go to 2.

ICM 95 – 201
Secant Method Pseudocode

input : x0, x1, eps, n
extern: f(x)
1: f0 ← f(x0);
2: f1 ← f(x1);
3: for i ← 1 to n do
4:   c ← x1 − f1·(x1 − x0)/(f1 − f0);
5:   fc ← f(c);
6:   x0 ← x1;  /* update parameters */
7:   x1 ← c;
8:   f0 ← f1;
9:   f1 ← fc;
10:  if |x1 − x0| < eps then
       return: x1 ;  /* Or: |fc| < eps */

return: x1
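A Python sketch of the secant pseudocode (names mine), with one extra guard against a flat secant line that the slide version leaves implicit:

```python
def secant(f, x0, x1, eps=1e-12, n=50):
    f0, f1 = f(x0), f(x1)
    for _ in range(n):
        if f1 == f0:                 # flat secant line: cannot divide
            return x1
        c = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1              # update parameters
        x1, f1 = c, f(c)
        if abs(x1 - x0) < eps:       # or: abs(f1) < eps
            return x1
    return x1

root = secant(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
print(root)  # about 2.09455148
```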

ICM 96 – 201
Regula Falsi
Regula Falsi
The regula falsi, or ‘false position,’ method10 is very old; the Egyptians
used the concept. The method appears in the Vaishali Ganit (India, 3rd
century BC), Book on Numbers and Computation & Nine Chapters on
the Mathematical Art (China, 2nd century BC), Book of the Two Errors
(Persia, c 900), and came to the west in Fibonacci’s Liber Abaci (1202).
Regula falsi combines the secant and bisection techniques: Use the
secant to find a “middle point,” then keep the interval with a sign
change, i.e., that brackets the root.

[Figures: two steps of regula falsi: the secant through the bracketing points meets the axis at c; the subinterval that still brackets the root is kept.]

¹⁰ Also called Modified Regula Falsi, Double False Position, Regula Positionum, Secant Method, Rule of Two Errors, etc. My favourite name is Yíng bù zú: ‘Too much and not enough.’
ICM 97 – 201
Regula Falsi Method Error

Theorem
Let f ∈ C²(I) for some interval I ⊂ ℝ. Suppose α ∈ I is a root of f.
Choose a and b ∈ I such that sign(f(a)) ≠ sign(f(b)), and define

  c = b − f(b) · (b − a) / (f(b) − f(a)).

Then

  |α − c| ≤ M · |α − a|

or, with e_n = |α − x_n|,

  e_{n+1} ≤ M · e_n

where 0 < M < 1 is a constant depending on |f″(x)| and |f′(x)| on I.

This is linear or “order 1” convergence. (The same as the bisection method.)

ICM 98 – 201
Regula Falsi Algorithm

Algorithm (Regula Falsi (Basic Method))

Given f, a, and b:
1. Set k = 1 and S(a, b) = b − f(b)·(b − a)/(f(b) − f(a)) = (a·f(b) − b·f(a))/(f(b) − f(a))
2. Compute c = S(a, b)
3. If f(c) ≈ 0, then c is a root; quit
4. If f(c)·f(a) < 0, then b ← c
5. else a ← c
6. If |b − a| is very small compared to |a|, then a is a root; quit
7. Set k = k + 1, and (if k isn’t too big) go to 2

ICM 99 – 201
Regula Falsi Pseudocode

input : a, b, eps, n
extern: f(x)
1: fa ← f(a);
2: fb ← f(b);
3: if fa·fb > 0 then
4:   stop ;  /* Better: sign(fa) ≠ sign(fb) */
5: for i ← 1 to n do
6:   c ← (a·fb − b·fa)/(fb − fa) ;  /* Better: c ← b − fb·(b − a)/(fb − fa) */
7:   fc ← f(c);
8:   if |fc| < eps then
       return: c
9:   if fa·fc < 0 then
10:    b ← c;
11:    fb ← fc;
12:  else
13:    if fb·fc < 0 then
14:      a ← c;
15:      fa ← fc;
16:    else
         return: c

return: c
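A Python sketch of the regula falsi pseudocode (names mine), using the numerically better update from the comment on line 6:

```python
def regula_falsi(f, a, b, eps=1e-10, n=200):
    fa, fb = f(a), f(b)
    if fa * fb > 0:                          # better: compare signs
        raise ValueError("f must change sign on [a, b]")
    c = a
    for _ in range(n):
        c = b - fb * (b - a) / (fb - fa)     # false-position point
        fc = f(c)
        if abs(fc) < eps:
            return c
        if fa * fc < 0:                      # root bracketed in [a, c]
            b, fb = c, fc
        else:                                # root bracketed in [c, b]
            a, fa = c, fc
    return c

root = regula_falsi(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
print(root)  # about 2.09455148
```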

ICM 100 – 201


Halley’s Method

Halley’s Method
Halley’s method¹¹ of 1694 extends Newton’s method to obtain cubic
convergence. Halley was motivated by de Lagny’s 1692 work showing algebraic
formulas for extracting roots. Using the quadratic term in Taylor’s formula
obtains the extra degree of convergence. Just as Newton did not recognize the
derivative appearing in his iterative method, Halley also missed the connection
to calculus — it was first seen by Taylor in 1712. The version of Halley’s
method we use comes from applying Newton’s method to the auxiliary function

  g(x) = f(x) / √|f′(x)|.

Then the iterating function for Halley’s method is

  x_{n+1} = x_n − 2 f(x_n) f′(x_n) / (2 f′(x_n)² − f(x_n) f″(x_n)).

¹¹ For historical background and a nice development, see T. Scavo and J. Thoo, “On the Geometry of Halley’s Method,” Am. Math. Mo., Vol. 102, No. 5, pp. 417-426.
ICM 101 – 201
Halley’s Method Error

Theorem
Let f ∈ C³(I) on some interval I ⊂ ℝ. Suppose α ∈ I is a root of f.
Choose x₀ ∈ I and define

  x_{n+1} = x_n − 2 f(x_n) f′(x_n) / (2 f′(x_n)² − f(x_n) f″(x_n)).

Then

  |α − x_{n+1}| ≤ M · |α − x_n|³

or, with e_n = |α − x_n|,

  e_{n+1} ≤ M · e_n³

where M is an upper bound for |3 f″(x)² − 2 f′(x) f‴(x)| / (12 f′(x)²) on I.

This is called cubic or “order 3” convergence.


ICM 102 – 201
Halley’s Method Algorithm

Algorithm (Halley’s Method (Basic Outline))

Given f and x₀.
0. Compute f′ and f″
1. Set k = 1 and H(x) = x − 2 f(x) f′(x) / (2 f′(x)² − f(x) f″(x))
2. Compute x_k = H(x_{k−1}).
3. If f(x_k) ≈ 0, then x_k is a root; quit
4. else if |f(x_k)| or |x_k − x_{k−1}| is very small, then x_k ≈ a root; quit
5. Set k = k + 1 and (if k isn’t too big) go to 2.

ICM 103 – 201


Halley’s Method Pseudocode

input : x0, eps, n
extern: f(x), df(x) = f′(x), ddf(x) = f″(x)

1: fx0 ← f(x0);
2: dfx0 ← df(x0);
3: ddfx0 ← ddf(x0);
4: for i ← 1 to n do
5:   x1 ← x0 − 2·fx0·dfx0 / (2·dfx0² − fx0·ddfx0);
6:   fx1 ← f(x1);
7:   if |fx1| + |x1 − x0| < eps then
8:     return: x1
9:   x0 ← x1;
10:  fx0 ← f(x1);
11:  dfx0 ← df(x1);
12:  ddfx0 ← ddf(x1);
return: x1
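A Python sketch of Halley’s pseudocode (names mine), again on x³ − 2x − 5:

```python
def halley(f, df, ddf, x0, eps=1e-12, n=50):
    x1 = x0
    for _ in range(n):
        fx, dfx, ddfx = f(x0), df(x0), ddf(x0)
        x1 = x0 - 2*fx*dfx / (2*dfx**2 - fx*ddfx)
        if abs(f(x1)) + abs(x1 - x0) < eps:
            return x1
        x0 = x1
    return x1

root = halley(lambda x: x**3 - 2*x - 5,   # f
              lambda x: 3*x**2 - 2,       # f'
              lambda x: 6*x,              # f''
              2.0)
print(root)  # about 2.09455148
```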

ICM 104 – 201


A Sample Problem

Polynomial Root
Find the real root of f(x) = −x^11 + x² + x + 0.5 in [1/2, 3/2]. (r = 1.098282972)
(This is f’s only real root.)

  Bisection            Newton (x₀=0.5 | x₀=1.5)   Secant              Regula Falsi    Halley (x₀=0.5 | x₀=1.5)
  [0.5000, 1.5000]      0.5000    1.500           [+0.500, +1.500]    [0.500, 1.5]     0.5000    1.500
  [1.0000, 1.5000]     −0.1280    1.370           [+0.515, +0.500]    [0.515, 1.5]    −0.3752    1.268
  [1.0000, 1.2500]     −0.6500    1.258           [−0.124, +0.515]    [0.530, 1.5]    −0.0538    1.128
  [1.0000, 1.1250]     −0.0236    1.171           [−0.406, −0.124]    [0.545, 1.5]    −1.2080    1.099
  [1.0625, 1.1250]     −0.5241    1.119           [−0.956, −0.406]    [0.561, 1.5]    −0.9812    1.098
  [1.0938, 1.1250]      3.315     1.100           [−0.230, −0.956]    [0.576, 1.5]    −0.6556    1.098
  [1.0938, 1.1094]      3.014     1.098           [+0.085, −0.230]    [0.592, 1.5]    −0.9822      ·
  [1.0938, 1.1016]      2.740     1.098           [−0.607, +0.085]    [0.607, 1.5]    −0.6588      ·
  [1.0977, 1.1016]      2.491       ·             [−1.171, −0.607]    [0.623, 1.5]    −0.9934      ·
  [1.0977, 1.0996]      2.264       ·             [−0.583, −1.170]    [0.639, 1.5]    −0.6846      ·
  [1.0977, 1.0986]      2.059       ·             [−0.558, −0.583]    [0.654, 1.5]    −1.085       ·

(The Newton and Halley columns show two runs, started from x₀ = 0.5 and from x₀ = 1.5.)

ICM 105 – 201


The Sample Problem’s Graph

[Figure: plot of f(x) = −x^11 + x² + x + 0.5 on 0 ≤ x ≤ 2; the curve crosses the x-axis near x ≈ 1.098.]

A plot of f(x) = −x^11 + x² + x + 0.5

ICM 106 – 201


The Charts

  Method           Type                  Update Function
  Bisection        Bracketing (2 pts)    B(a, b) = ½(a + b)
  Regula Falsi     Bracketing (2 pts)    R(a, b) = (a·f(b) − b·f(a)) / (f(b) − f(a))
  Secant method    Approximating (1 pt)  S(x_n, x_{n−1}) = (x_{n−1}·f(x_n) − x_n·f(x_{n−1})) / (f(x_n) − f(x_{n−1}))
  Newton’s method  Approximating (1 pt)  N(x_n) = x_n − f(x_n)/f′(x_n)
  Halley’s method  Approximating (1 pt)  H(x_n) = x_n − 2f(x_n)f′(x_n) / (2f′(x_n)² − f(x_n)f″(x_n))

  Method           Error                     Convergence Speed           Computation Cost
  Bisection        e_{n+1} ≤ ½ e_n           Linear (order 1)            Low
  Regula Falsi     e_{n+1} ≤ C e_n           Linear (order 1)            Medium
  Secant method    e_{n+1} ≤ C e_n e_{n−1}   Superlinear (order ≈ 1.6)   Medium
  Newton’s method  e_{n+1} ≤ C e_n²          Quadratic (order 2)         High
  Halley’s method  e_{n+1} ≤ C e_n³          Cubic (order 3)             Very High
ICM 107 – 201
Appendix III: Rate of Convergence

Definition (Rate of Convergence)

Let x_n → x* and set e_n = |x* − x_n|. Then x_n converges to x* with rate r iff
there is a positive constant C (the asymptotic error constant) such that

  lim_{n→∞} e_{n+1} / e_n^r = C

Terminology
  Rate         Parameters
  Sublinear    r = 1 and C = 1
  Linear       r = 1 and 0 < C < 1
  Superlinear  r > 1
  Quadratic    r = 2
  Cubic        r = 3
NB: Quadratic and cubic are special cases of superlinear convergence.
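The rate r can also be estimated from computed errors, since r ≈ log(e_{n+1}/e_n) / log(e_n/e_{n−1}) once the limit takes hold; a Python sketch using Newton’s method on f(x) = x² − 2 (the function and starting point are my choices):

```python
import math

# Errors e_n from three Newton steps on f(x) = x^2 - 2, x0 = 1.5.
target = math.sqrt(2)
x, errs = 1.5, []
for _ in range(3):
    x = x - (x*x - 2) / (2*x)
    errs.append(abs(x - target))

r_est = math.log(errs[2]/errs[1]) / math.log(errs[1]/errs[0])
print(r_est)  # close to 2, i.e. quadratic convergence
```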
ICM 108 – 201
Exercises, I
Exercises
For each of the functions given in 1. to 7. below:
a. Graph f in a relevant window.
b. Use Maple’s fsolve to find f’s root to 10 digits.
c. Use each of the five methods with a maximum of 15 steps filling in the table:
     Method | Approx Root | Relative Error | No. of Steps

1. The ‘Newton-era’ test function T(x) = x³ − 2x − 5
2. f(x) = 1/(13 − 7x) − 1/x^11
3. g(x) = ∫₀^x sin(t²/2) dt
4. h(x) = x − 8e^(−x)
5. R(x) = (30x − 31) / (29(x − 1))
6. S(x) = (sin(x²) + 1) / (cos(x) + 2) for x ∈ [0, 4]
7. The intransigent function y(x) = 10 · e^(x·ln(x²)/(1+x))

8. Explain why the bisection method has difficulties with two roots in an interval.

ICM 109 – 201


Exercises, II
Exercises
For Exercises 9. to 15., generate your personal polynomial p(x) in Maple by entering:
> randomize(your phone number, no dashes or spaces):
deg := 1+2*rand(4..9)():
p := randpoly(x, degree=deg, coeffs=rand(-2..2)):
p := unapply(sort(p), x);
9. Use fsolve to find the roots of your polynomial p(x).
10. Compare the results of the four root finding methods applied to p(x).
11. Report on stopping conditions: function value, step/interval size, maximum
number of iterations.
12. Find any bad initial points for Newton’s method for p(x).
13. Verify Fibonacci’s root12 x = 1.368808107 of the equation x3 + 2x2 + 10x = 20.
14. Determine the convergence rates of the following sequences.
    a. 2^(−n)   b. 1 + 2^(1−2^n)   c. (n + 1)/n   d. sin(k)/k
15. Solve this problem from The Nine Chapters on the Mathematical Art (c 200 BCE):
“Now an item is purchased jointly; everyone contributes 8 coins, the excess is 3;
everyone contributes 7, the deficit is 4. Tell: The number of people, the item
price, what is each?”

ICM 110 – 201


12 Fibonacci found this root in 1225 — no one knows how he did it!
Exercises, III
Exercises
16. Set f(x) = x³·e^x − √2 and x0 = 1.0.
a. Plot f over 0  x  1 to see the root.
b. Use Newton’s method to find the root; print xn for each iteration.
c. Alter Newton’s method:
i. Replace f′(x) in Newton’s method with a centered difference formula
approximation, Eq. (CD).
ii. Execute the altered Newton’s method beginning with h = 1.0.
iii. Decrease h by 0.1 and rerun the altered Newton’s method.
iv. Keep decreasing h until the results are very close to Newton’s
method.
v. Determine practical reasons for using this modification rather than
the original method.

17. [Group Exercise] Redo the previous problem, finding all roots in
[−1, 1] of the 8th Chebyshev polynomial
    T8(x) = 128x⁸ − 256x⁶ + 160x⁴ − 32x² + 1

ICM 111 – 201


Links and Others
More information:

Dr John Matthews’ modules: Wikipedia entries:


• The Bisection Method • Bisection Method
• Newton’s Method • Newton’s Method
• The Secant Method • Secant Method
• The Regula Falsi Method • Regula Falsi Method

See also: Interactive Educational Modules in Scientific Computing (U of I)


and MathWorld (Wolfram Research)
Investigate:
• Müller’s method
• Brent’s method
• Bernoulli’s method
• Jenkins-Traub method
• Laguerre’s method
• Durand-Kerner method
• Fibonacci Search method
• Splitting Circle method
• Maple’s fsolve
• Matlab’s fzero
• Wilkinson’s Perfidious Polynomial:  w(x) := ∏(k=1..20) (x − k)

ICM 112 – 201


S II. Special Topics: Modified Newton’s Method

Sections
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
2. Modified Newton’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

ICM 113 – 201


II. Special Topic: Modified Newton’s Method

Newton’s Method Revisited


Newton’s method uses the iteration function

    N(x) = x − f(x)/f′(x).

A fixed point of N, that is a value x* where N(x*) = x*, is a zero of f(x).


It was really Simpson who realized the connection of Newton’s method
with calculus; Newton had developed an algebraic method in 1669 (not
publishing it until 1711); Simpson’s generalized version appeared in 1740
in his text A New Treatise of Fluxions. In 1690, midway between Newton
and Simpson, Raphson published a simplified version of Newton’s method
that was based on iterations, much like ours today.

ICM 114 – 201


Modified Newton’s Method
Modified Newton’s Method
To modify Newton’s method, replace the “correcting factor” quotient
f/f′ with f′/f″. Our new iterator is

    M(x) = x − f′(x)/f″(x).

Choose an initial value x0. Then calculate the values x(n+1) = M(xn) for
n = 1, 2, . . . . The question is: Does xn have a limit?

Convergence?
Use Newton’s example function: y = x³ − 2x − 5. Then

    M(x) = x − (3x² − 2)/(6x)

Starting with x0 = 1 gives the sequence x1 = 0.83̄, x2 = 0.816̄,
x3 = 0.81649659, x4 = 0.81649658. Where are these points going?
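Note that M is just Newton’s method applied to f′, so the iterates head for a zero of f′ (a critical point of f), not a root of f. A quick check in Python (an illustration only, outside the course’s Maple tools):

```python
import math

def M(x):
    # Modified Newton iterator for f(x) = x^3 - 2x - 5:
    # M(x) = x - f'(x)/f''(x) = x - (3x^2 - 2)/(6x)
    return x - (3 * x * x - 2) / (6 * x)

x = 1.0
for n in range(1, 6):
    x = M(x)
    print(n, x)

# The limit is sqrt(2/3) = 0.8164965..., the positive root of f'(x) = 3x^2 - 2.
print(abs(x - math.sqrt(2.0 / 3.0)) < 1e-8)   # True
```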
ICM 115 – 201
Exercises

Problems
1. Use Maple to generate a random polynomial with:
> randomize(your phone number, no dashes or spaces):
deg := 1+2*rand(4..9)():
p := randpoly(x, degree=deg, coeffs=rand(-2..2)):
p := unapply(sort(p), x);
2. Apply the Modified Newton’s Method to your polynomial with a
selection of starting points.
3. Produce a chart of your intermediate results.
4. Graph your polynomial and the trajectories using your data chart.
5. Can you determine a specific value where the trajectories change
from one to another target point?

ICM 116 – 201


V. Numerical Integration
Sections
1. Numerical Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

2. Left Endpoint, Right Endpoint, and Midpoint Sums . . . . . . . . . . . .119

3. Trapezoid Sums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

4. Simpson’s Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

5. Gaussian Quadrature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

6. Gauss-Kronrod Quadrature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

7. A Menagerie of Test Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

Appendix IV: Legendre & Stieltjes Polynomials for GK7,15 . . . . . . 137

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

Links and Others . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139


ICM 117 – 201
V. Numerical Integration

What is Numerical Integration?


Numerical integration or (numerical) quadrature is the calculation of a
definite integral using numerical formulas, not the fundamental theorem.
The Greeks studied quadrature: given a figure, construct a square that has
the same area. The two most famous are Hippocrates of Chios’ Quadrature of
the Lune (c. 450 BC) and Archimedes’ Quadrature of the Parabola (c. 250 BC).
Archimedes used the method of exhaustion, a precursor to calculus invented
by Eudoxus.

[Figure: quadrature of a plane region]

Squaring the circle is one of the classical problems of constructing a
square with the area of a given circle – it was shown impossible by
Lindemann’s theorem (1882).13

13 Lindemann was the first to prove that π is transcendental.

ICM 118 – 201
Methods of Elementary Calculus
Rectangle Methods
Left endpoint sum:   An ≈ Σ(k=1..n) f(x(k−1)) Δxk
Midpoint sum:        An ≈ Σ(k=1..n) f(mk) Δxk,  with mk = (x(k−1) + xk)/2
Right endpoint sum:  An ≈ Σ(k=1..n) f(xk) Δxk

Error bounds:
  Left:      en ≤ (b − a)²/2 · M1 · 1/n
  Midpoint:  en ≤ (b − a)³/24 · M2 · 1/n²
  Right:     en ≤ (b − a)²/2 · M1 · 1/n

where Mi = max |f^(i)(x)|.   Example: ∫₀^π f(x) dx = 2 ≈ [1.984, 2.008, 1.984] for n = 10
(left, midpoint, right).
ICM 119 – 201
Trapezoid Sums

Trapezoid Sums
Instead of the degree 0 rectangle approximations to the function, use a
linear degree 1 approximation. The area of the trapezoid is given by

    A_T = ½ [ f(x(k−1)) + f(xk) ] Δxk

This gives an approximation for the integral

    ∫_a^b f(x) dx ≈ Σ(k=1..n) ½ [ f(x(k−1)) + f(xk) ] Δxk

[Midpoint: measure height at average x vs. trapezoid: average the height measures]

The formula is often written as

    Tn ≈ [ f(x0) + 2 Σ(k=1..n−1) f(xk) + f(xn) ] · Δxk/2

Error for the trapezoid rule is

    en ≤ (b − a)³/12 · M2 · 1/n²
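The composite rule above is a few lines of code; a Python sketch (illustration only, the `trapezoid` helper is ours), applied to the test function used on the next slide:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal panels."""
    dx = (b - a) / n
    total = 0.5 * (f(a) + f(b))          # endpoint values carry weight 1/2
    for k in range(1, n):
        total += f(a + k * dx)           # interior values carry weight 1
    return total * dx

f = lambda x: (math.sin(x) + math.sin(2 * x) / 2
               - math.sin(4 * x) / 4 + math.sin(8 * x) / 16)
print(trapezoid(f, 0.0, math.pi, 10))    # ≈ 1.9835; the exact integral is 2
```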
ICM 120 – 201
Sample Trapezoid

Example
Let f(x) = sin(x) + ½ sin(2x) − ¼ sin(4x) + (1/16) sin(8x) over [0, π].

With an equipartition, Δx = π/10 ≈ 0.314. Then

    T10 = [ f(0) + 2 Σ(k=1..9) f(kπ/10) + f(π) ] · Δx/2

which gives T10 = 1.984, with absolute error of 0.016.

ICM 121 – 201


Simpson’s Rule

Simpson’s Rule
We now move to a degree 2 approximation. The easiest way to have 3 data
points is to take the panels in pairs: instead of rectangle base [xi, x(i+1)],
use [xi, x(i+1), x(i+2)]. So we require an even number of panels. The area
under the parabola is

    A_S = ⅓ [ f(xi) + 4 f(x(i+1)) + f(x(i+2)) ] Δx

This gives a 2n-panel approximation for the integral

    ∫_a^b f(x) dx ≈ Σ(k=1..n) [ f(x(2k−2)) + 4 f(x(2k−1)) + f(x(2k)) ] · Δx/3

most often written as

    S(2n) = Δx/3 · [ f(x0) + 4 f(x1) + 2 f(x2) + 4 f(x3) + ··· + 4 f(x(2n−1)) + f(x(2n)) ]

The error is bounded by

    en ≤ (b − a)⁵/180 · M4 · 1/n⁴
ICM 122 – 201
Sample Simpson

Example
Let f(x) = sin(x) + ½ sin(2x) − ¼ sin(4x) + (1/16) sin(8x) over [0, π].

With a 10 panel equipartition, Δx = π/10 ≈ 0.3141592654.

Then, with yi = f(xi),

    S10 = ⅓ [ y0 + 4y1 + 2y2 + ··· + 4y9 + y10 ] Δx

which gives S10 = 2.000006784, with absolute error of 6.78 · 10⁻⁶.

ICM 123 – 201


A Maple Comparison

Approximating a Difficult Integral


Consider ∫₁² 10⁻⁴ / ((x − π/2)² + 10⁻⁸) dx. This integrand has a sharp peak at π/2.

The exact value of the integral (using the FToC) is

    arctan(5·10³·(4 − π)) − arctan(5·10³·(2 − π)) ≈ 3.1411844701381

Maple gives

                 n = 50       n = 500      n = 5000
    left         0.0497133    0.541336     3.42282
    right        0.0497180    0.541336     3.42282
    midpoint     3.1210200    4.052010     2.88243
    trapezoid    0.0497157    0.541336     3.42282
    Simpson      2.0972500    2.881790     3.06256

To achieve a relative error below 1% requires approximately n ≥ 6000.


ICM 124 – 201
The Chart

Quadrature        “Height”                                 Error Bound 14

Left end point    f(xi)                                    (b−a)²/2 · M1 · 1/n  = O(h)
Right end point   f(x(i+1))                                (b−a)²/2 · M1 · 1/n  = O(h)
Trapezoid Rule    [ f(xi) + f(x(i+1)) ] / 2                (b−a)³/12 · M2 · 1/n² = O(h²)
Midpoint          f( (xi + x(i+1))/2 )                     (b−a)³/24 · M2 · 1/n² = O(h²)
Simpson’s Rule    [ f(xi) + 4 f(x(i+1)) + f(x(i+2)) ] / 3  (b−a)⁵/180 · M4 · 1/n⁴ = O(h⁴)

where Mi ≥ max |f^(i)(x)| and h = 1/n.

ICM 125 – 201


14 An approximation without an error bound has little to no value.
Gaussian Quadrature

Johann Carl Friedrich Gauss


About 1815, while Gauss was finishing constructing an astronomical observatory,
he wrote a paper15 on approximating integrals. Gauss’s technique was studied
and extended by Christoffel in 1858. There are several good ways to develop
this method. We’ll use the easiest . . .

In Search of Improvements
Write the rules we’ve seen as sums:

  Left endpt:   Ln = (1/n) f(x0) + (1/n) f(x1) + ··· + (1/n) f(x(n−1))
  Right endpt:  Rn = (1/n) f(x1) + (1/n) f(x2) + ··· + (1/n) f(xn)
  Midpoint:     Mn = (1/n) f(x(m1)) + (1/n) f(x(m2)) + ··· + (1/n) f(x(mn))
  Trapezoid:    Tn = (1/2n) f(x0) + (1/n) f(x1) + ··· + (1/n) f(x(n−1)) + (1/2n) f(xn)
  Simpson’s:    Sn = (1/3n) f(x0) + (4/3n) f(x1) + (2/3n) f(x2) + ··· + (4/3n) f(x(n−1)) + (1/3n) f(xn)

15 “Methodus nova integralium valores per approximationem inveniendi,”
Comment Soc Regiae Sci Gottingensis Recentiores, v. 3, 1816.

ICM 126 – 201
Patterns
Observations
• Each of the formulas has the same form: a weighted sum

      An = w1·f(x1) + w2·f(x2) + ··· + wn·f(xn)

  with different sets of weights wi and different sets of nodes xi.

• Any closed interval [a, b] can be mapped to and from [−1, 1], so we can
  focus just on ∫ f(x) dx over [−1, 1].
  [ T(t) = 2(t − a)/(b − a) − 1;  T⁻¹(t) = a·(1 − t)/2 + b·(1 + t)/2 ]

• Gauss posed the question: Is there a “best choice” of weights {wi} and
  nodes {xi}? Do nodes have to be equidistant?

• The answer depends on what “best” means.

Since we have 2n ‘unknowns’ wi and xi, let’s look for a set that integrates
a degree 2n − 1 polynomial exactly. (Remember: a degree 2n − 1 polynomial has
2n coefficients.)
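The interval map above is worth seeing in action: a rule stated on [−1, 1] extends to any [a, b] through x = T⁻¹(t), picking up the factor dx/dt = (b − a)/2. A Python sketch (illustration only; `gauss3_on` is our own helper, using the 3-point Gaussian rule derived two slides ahead):

```python
import math

def gauss3_on(f, a, b):
    """3-point Gaussian quadrature transplanted from [-1, 1] to [a, b]."""
    s = math.sqrt(3.0 / 5.0)
    t_inv = lambda t: a * (1 - t) / 2 + b * (1 + t) / 2   # T^{-1}: [-1,1] -> [a,b]
    g = lambda t: f(t_inv(t))
    # (b - a)/2 is the Jacobian of the change of variable.
    return (b - a) / 2 * (5 * g(-s) / 9 + 8 * g(0.0) / 9 + 5 * g(s) / 9)

print(gauss3_on(lambda x: x * x, 0.0, 2.0))   # ∫₀² x² dx = 8/3 ≈ 2.6667
```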
ICM 127 – 201
Sampling 3
Example (Third Degree)
Set n = 3. Determine the choice of wi and of xi so that

    ∫ x^p dx over [−1, 1]  =  Σ(k=1..3) wk · xk^p

exactly for p = 0, 1, . . . , 5 = 2·3 − 1.

The range for the power p gives us six equations:

    w1 + w2 + w3 = 2
    w1·x1 + w2·x2 + w3·x3 = 0
    w1·x1² + w2·x2² + w3·x3² = 2/3
    w1·x1³ + w2·x2³ + w3·x3³ = 0
    w1·x1⁴ + w2·x2⁴ + w3·x3⁴ = 2/5
    w1·x1⁵ + w2·x2⁵ + w3·x3⁵ = 0

    ⟹  x1 = −√(3/5), x2 = 0, x3 = √(3/5);  w1 = 5/9, w2 = 8/9, w3 = 5/9

Our Gaussian quadrature is

    G3(f) = (5/9) f(−√(3/5)) + (8/9) f(0) + (5/9) f(√(3/5))
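A direct check of G3’s exactness, in Python (illustration only; the course’s testing below is done in Maple):

```python
import math

def G3(f):
    """3-point Gaussian quadrature for the integral of f over [-1, 1]."""
    s = math.sqrt(3.0 / 5.0)
    return 5.0 / 9.0 * f(-s) + 8.0 / 9.0 * f(0.0) + 5.0 / 9.0 * f(s)

# Exact (up to rounding) through degree 5 = 2n - 1:
print(G3(lambda x: x ** 4))   # ≈ 2/5, matching the true integral of x^4
print(G3(lambda x: x ** 5))   # 0, matching the true integral of x^5
print(G3(lambda x: x ** 6))   # ≈ 0.24, but the true integral is 2/7 ≈ 0.2857
```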

ICM 128 – 201


Testing Gauss

Random Polynomials
Generate and test a random 5th degree polynomial.
  > p := unapply(sort(randpoly(x, degree = 5), x), x)
        x → 7x⁵ + 22x⁴ − 55x³ − 94x² + 87x − 56
  > G3 := 5/9*p(-sqrt(3/5)) + 8/9*p(0) + 5/9*p(sqrt(3/5))
        −2488/15
  > Int(p(x), x = -1..1) = int(p(x), x = -1..1)
        ∫ p(x) dx over [−1, 1] = −2488/15

Generate and test a random 7th degree polynomial.
  > q := unapply(sort(randpoly(x, degree = 7), x), x)
        x → 97x⁷ + 73x⁶ − 4x⁵ − 83x³ − 10x − 62
  > int(q(x), x = -1..1) = 5/9*q(-sqrt(3/5)) + 8/9*q(0) + 5/9*q(sqrt(3/5))
        −722/7 = −2662/25

The two sides disagree for q: G3 is exact only through degree 5.
ICM 129 – 201


Gaussian Properties

Theorem (Error Estimate)

Let f have 2n continuous derivatives. Then for en = | Gn − ∫ f(x) dx over [−1, 1] |,

    en ≤ π / (2^(2n) · (2n)!) · M(2n)

where M(2n) ≥ max |f^(2n)(x)|.

Values of Gaussian Weights and Nodes

There are numerous sources online, e.g.:
1. The classic Abramowitz and Stegun Handbook (see the entire book)
2. Holoborodko or Kamermans
We could calculate the values directly:
Set Pn(x) = 1/(2^n n!) · d^n/dx^n [ (x² − 1)^n ] (the Legendre polynomials). Then

    {xi, i = 1..n} = {zeros of Pn}   and   wi = 2 / ( (1 − xi²) [Pn′(xi)]² )
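The weight formula can be verified by hand for n = 3, where P3(x) = (5x³ − 3x)/2 has zeros 0 and ±√(3/5). A Python sketch (illustration only; `dP3` is our own helper):

```python
import math

def dP3(x):
    # P3(x) = (5x^3 - 3x)/2, so P3'(x) = (15x^2 - 3)/2
    return (15 * x * x - 3) / 2

# w_i = 2 / ((1 - x_i^2) [P3'(x_i)]^2) at the zeros of P3:
for x in (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)):
    w = 2 / ((1 - x * x) * dP3(x) ** 2)
    print(round(x, 6), round(w, 6))   # recovers the G3 weights 5/9, 8/9, 5/9
```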
ICM 130 – 201
Gauss-Kronrod Quadrature

Aleksandr Kronrod’s Idea (1964)

One difficulty in Gaussian quadrature is that increasing the number of
nodes requires recomputing all the values of
• nodes  • weights  • function evaluations
Kronrod16 discovered he could interlace n + 1 new nodes with n original
Gaussian nodes and have a rule of order 3n + 1. A 2n + 1 node Gaussian
quadrature would have order 4n + 1, but with significant extra computation
for an increase of only n in order over Kronrod’s method.
Bad news: calculating the nodes and weights is way beyond the scope of our class.
The nodes are the roots of the Stieltjes or Stieltjes-Legendre polynomials. (App IV)

Gauss-Kronrod quadrature is used in Maple, Mathematica, Matlab, and Sage;
it’s included in the QUADPACK library, the GNU Scientific Library, the NAG
Numerical Libraries, and in R. GK7,15 is the basis of numerical integration
in TI calculators. (Casio uses Simpson’s rule; HP, adaptive Romberg.)

16 Kronrod, A. S. (1964.) “Integration with control of accuracy” (in Russian),
Dokl. Akad. Nauk SSSR 154, 283–286.

ICM 131 – 201
Gauss-Kronrod Quadrature in Practice

GK7,15 (1989)
A widely used implementation is based on a Gaussian quadrature with 7 nodes.
Kronrod adds 8 to total 15 nodes.

    G7 = Σ(k=1..7) wk f(xk)          GK7,15 = Σ(j=1..15) wj f(xj)

    e7,15 ≈ | G7 − GK7,15 |,  or, in practice, use17  ( 200 |G7 − GK7,15| )^(3/2)

GK7,15 on [−1, 1]:

  Gauss-7 nodes             Weights
   0.00000 00000 00000      0.41795 91836 73469
  ±0.40584 51513 77397      0.38183 00505 05119
  ±0.74153 11855 99394      0.27970 53914 89277
  ±0.94910 79123 42759      0.12948 49661 68870

  Kronrod-15 nodes            Weights
   0.00000 00000 00000   G    0.20948 21410 84728
  ±0.20778 49550 07898   K    0.20443 29400 75298
  ±0.40584 51513 77397   G    0.19035 05780 64785
  ±0.58608 72354 67691   K    0.16900 47266 39267
  ±0.74153 11855 99394   G    0.14065 32597 15525
  ±0.86486 44233 59769   K    0.10479 00103 22250
  ±0.94910 79123 42759   G    0.06309 20926 29979
  ±0.99145 53711 20813   K    0.02293 53220 10529
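The tabulated nodes and weights are enough to run both rules by hand; a Python sketch (illustration only; `symmetric_rule` is our own helper), reproducing the e^(−x²) example on the next slide:

```python
import math

# Positive-half node/weight pairs; both rules are symmetric about 0.
GAUSS7 = [(0.000000000000000, 0.417959183673469),
          (0.405845151377397, 0.381830050505119),
          (0.741531185599394, 0.279705391489277),
          (0.949107912342759, 0.129484966168870)]
KRONROD15 = [(0.000000000000000, 0.209482141084728),
             (0.207784955007898, 0.204432940075298),
             (0.405845151377397, 0.190350578064785),
             (0.586087235467691, 0.169004726639267),
             (0.741531185599394, 0.140653259715525),
             (0.864864423359769, 0.104790010322250),
             (0.949107912342759, 0.063092092629979),
             (0.991455371120813, 0.022935322010529)]

def symmetric_rule(rule, f):
    total = 0.0
    for x, w in rule:
        total += w * f(x) if x == 0.0 else w * (f(x) + f(-x))
    return total

f = lambda x: math.exp(-x * x)
G7 = symmetric_rule(GAUSS7, f)
GK = symmetric_rule(KRONROD15, f)
print(G7, GK, abs(G7 - GK))   # ≈ 1.4936483, 1.4936483, error estimate ≈ 2.3e-8
```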

ICM 132 – 201


17 Kahaner, Moler, & Nash, Numerical Methods and Software, Prentice-Hall, 1989.
GK Sample

Example
Find ∫ e^(−x²) dx over [−1, 1].

Using Maple gives:

    G7 = Σ(k=1..7) wk f(xk) = 1.49364828886941

    GK7,15 = Σ(k=1..15) wk f(xk) = 1.49364826562485

    e7,15 ≈ |G7 − GK7,15| = 2.324456 · 10⁻⁸

    int(f(x), x=-1..1, numeric) ≈ 1.49364826562485 = GK7,15

See Maple’s Online Help for int/numeric to see the methods available.
ICM 133 – 201
A Class Exercise

Easy, but Hard


Set f(x) = x − ⌊x⌋. Calculate ∫₀^6.4 f(x) dx.
Set n = 10. Find
1. The exact value 3.08
2. Left endpoint approximation
3. Right endpoint approximation
4. Midpoint approximation
5. Trapezoid rule approximation
6. Simpson’s rule approximation
7. Gauss 7 quadrature
8. Gaussian-Kronrod 7-15 quadrature

ICM 134 – 201


A Menagerie of Test Integrals

Integrals for Testing Numerical Quadratures18 , I


Lyness:

  1. I(λ) = ∫₁² 0.1 / ((x − λ)² + 0.01) dx

Piessens, de Doncker-Kapenga, Überhuber, & Kahaner:

  2. ∫₀¹ x^a log(1/x) dx = 1/(1 + a)²

  3. ∫₀¹ 4^(−a) / ((x − π/4)² + 16^(−a)) dx
        = arctan((4 − π)·4^(a−1)) + arctan(π·4^(a−1))

  4. ∫₀^π cos(2^a sin(x)) dx = π J₀(2^a)

  5. ∫₀¹ |x − 1/3|^a dx = ( (2/3)^(a+1) + (1/3)^(a+1) ) / (1 + a)

  6. ∫₀¹ |x − π/4|^a dx = ( (1 − π/4)^(a+1) + (π/4)^(a+1) ) / (1 + a)

  7. ∫ 1 / ( √(1 − x²) · (1 + x + 2^(−a)) ) dx over [−1, +1]
        = π / √( (1 + 2^(−a))² − 1 )
ICM 135 – 201


18 D. Zwillinger, Handbook of Integration p. 272, (A K Peters/CRC Press, 1992).
Test Integrals, II
Integrals for Testing Numerical Quadratures, II
Piessens, et al (continued):

  8. ∫₀^(π/2) sin^(a−1)(x) dx = 2^(a−2) Γ²(a/2) / Γ(a)

  9. ∫₀¹ log^a(1/x) dx = Γ(1 + a)

  10. ∫₀¹ cos(2^a x) / √(x(1 − x)) dx = π cos(2^(a−1)) J₀(2^(a−1))

  11. ∫₀^∞ x² e^(−2^(−a) x) dx = 2^(3a+1)

  12. ∫₀^∞ x^(a−1) / (1 + 10x)² dx = (1 − a)π / (10^a sin(πa))

Berntsen, Espelid, & Sørevik:

  13. ∫₀¹ |x − 1/3|^(−1/2) dx   (singularity)

  14. ∫ U(x) e^(x/2) dx over [−1, 2]   (discontinuity)

  15. ∫₀¹ e^(−2|x − 1/3|) dx   (C⁰ function)

  16. ∫₁² 10⁻⁴ / ((x − π/2)² + 10⁻⁸) dx   (sharp peak)

ICM 136 – 201


Appendix IV: Legendre & Stieltjes Polynomials for GK7,15
The Polynomials of GK7,15
The Gaussian nodes for G7 are the roots of the Legendre polynomial p7

    p7(x) = (429 x⁷ − 693 x⁵ + 315 x³ − 35 x) / 16

The additional nodes Kronrod adds for GK7,15 are the roots of the Stieltjes
polynomial E8 (from solving the system ∫ p7(x) E8(x) x^k dx = 0 over [−1, 1]
for k = 0..7):

    E8(x) = c · [ (4854324041/52932681) x⁸ − (1142193892/5881409) x⁶
                + (765588166/5881409) x⁴ − (501576364/17644227) x² + 1 ]
ICM 137 – 201


Problems
Exercises
For each of the following functions, investigate the integrals using: left endpoint,
midpoint, trapezoid, and Simpson’s rules

  1. S(x) = ∫₀^x [ sin(t²/2) − √π/(2x) ] dt

  2. Lyness’ integral I(λ) = ∫₁² 0.1 / ((x − λ)² + 0.01) dx for λ = π/2

  3. Modified Piessens’ integral ∫ |x² − π²/16|^0.1 dx over [−1, 1]

Investigate Gaussian and Gauss-Kronrod quadrature (after transforming the
interval to [−1, 1]) of the integral

  4. ∫₁² 10⁻⁴ / ((x − π/2)² + 10⁻⁸) dx

  5. Explain why the integrals over [−1, 1]

       ∫ p7(x) dx      ∫ E8(x) dx      ∫ p7(x)·E8(x) dx

     are all zero. Use numerical methods to approximate each.


ICM 138 – 201
Links and Others
More information:
Dr John Matthews’ modules: Wikipedia entries:
• Adaptive Simpson’s Rule • Newton-Cotes formulas
• Monte Carlo Integration • Romberg’s method
• Legendre Polynomials • Clenshaw-Curtis integration
• Chebyshev Polynomials • Cubature

See also: MathWorld (Wolfram Research) and Interactive Educational Modules


in Scientific Computing (U of I)
Investigate:
• Boole’s Rule
• adaptive quadrature
• orthogonal polynomials
• Vandermonde Matrix
• Maple’s evalf/Int command
• The Maple command ApproximateInt in the Student[Calculus1] package
• Matlab’s integral command
• Cubature
ICM 139 – 201
VI. Polynomial Interpolation

Sections
1. Polynomial Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

2. Lagrange Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

3. Interlude: Bernstein Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

4. Newton Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

5. Two Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156

6. Interlude: Splines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

Links and Others . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

ICM 140 – 201


VI. Polynomial Interpolation
What is Polynomial Interpolation?
An interpolating polynomial p(x) for a set of points S is a polynomial
that goes through each point of S. That is, for each point Pi = (xi , yi ) in
the set, p(xi ) = yi .
Typical applications include:
Approximating Functions TrueType Fonts (2nd deg) Fast Multiplication
Cryptography PostScript Fonts (3rd deg) Data Compression
Since each data point determines the value of one polynomial coefficient,
an n-point data set has a degree n − 1 interpolating polynomial.

    S = { [−2, 1], [−1, −1], [0, 1], [1, −1], [2, 1] }

    p4(x) = (2/3) x⁴ − (8/3) x² + 1

ICM 141 – 201


Free Sample

Finding an Interpolating Polynomial


Let S = { [−2, 1], [−1, −1], [0, 1], [1, −1], [2, 1] }.
1. Since S has 5 points, we compute a 4th degree polynomial

       p4(x) = a0 + a1 x + a2 x² + a3 x³ + a4 x⁴

2. Substitute the values of S into p4; write the results as a system of
   linear equations.

       [  1 ]   [ a0 − 2a1 + 4a2 − 8a3 + 16a4 ]   [ 1  −2  4  −8  16 ] [a0]
       [ −1 ]   [ a0 −  a1 +  a2 −  a3 +   a4 ]   [ 1  −1  1  −1   1 ] [a1]
       [  1 ] = [ a0                          ] = [ 1   0  0   0   0 ] [a2]
       [ −1 ]   [ a0 +  a1 +  a2 +  a3 +   a4 ]   [ 1   1  1   1   1 ] [a3]
       [  1 ]   [ a0 + 2a1 + 4a2 + 8a3 + 16a4 ]   [ 1   2  4   8  16 ] [a4]

3. Solve via your favorite method: p4(x) = (2/3) x⁴ − (8/3) x² + 1
ICM 142 – 201


Towards a Better Way: Lagrange Interpolation

Definition
Knots or Nodes: The x-values of the interpolation points.
Lagrange Fundamental Polynomial: Given a set of n + 1 knots, define

    Li(x) = ∏(k=0..n, k≠i) (x − xk) / (xi − xk)

          = (x − x0)/(xi − x0) × ··· × (x − x(i−1))/(xi − x(i−1))
            × (x − x(i+1))/(xi − x(i+1)) × ··· × (x − xn)/(xi − xn)

Lagrange Interpolating Polynomial: Given a set of n + 1 data points
(xk, yk), define

    pn(x) = Σ(k=0..n) yk Lk(x)
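The definition evaluates directly; a Python sketch (illustration only, `lagrange` is our own helper), using the data set S from the previous slides:

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points`
    (pairs (x_k, y_k) with distinct knots) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        Li = 1.0
        for k, (xk, _) in enumerate(points):
            if k != i:
                Li *= (x - xk) / (xi - xk)   # the k-th factor of L_i
        total += yi * Li
    return total

S = [(-2, 1), (-1, -1), (0, 1), (1, -1), (2, 1)]
for xk, yk in S:
    print(lagrange(S, xk) == yk)   # the interpolant reproduces every data point
print(lagrange(S, 0.5))            # p4(0.5) = (2/3)(0.5)^4 - (8/3)(0.5)^2 + 1 = 0.375
```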

ICM 143 – 201


Sampling a Better Way

Example (Lagrange Fundamental Polynomials)


The set of knots [−2, −1, 0, 1, 2] gives (in each product the self-factor
(x − xi)/(xi − xi) is struck out):

    L0(x) = (x+1)/(−2+1) · (x−0)/(−2−0) · (x−1)/(−2−1) · (x−2)/(−2−2)

    L1(x) = (x+2)/(−1+2) · (x−0)/(−1−0) · (x−1)/(−1−1) · (x−2)/(−1−2)

    L2(x) = (x+2)/(0+2) · (x+1)/(0+1) · (x−1)/(0−1) · (x−2)/(0−2)

    L3(x) = (x+2)/(1+2) · (x+1)/(1+1) · (x−0)/(1−0) · (x−2)/(1−2)

    L4(x) = (x+2)/(2+2) · (x+1)/(2+1) · (x−0)/(2−0) · (x−1)/(2−1)

Graph the Lk!
ICM 144 – 201
Sampling a Better Way

Example
Let S = { [−2, 1], [−1, −1], [0, 1], [1, −1], [2, 1] }.

We have [xk] = [−2, −1, 0, 1, 2] and [yk] = [1, −1, 1, −1, 1]. Then
p4(x) = Σ(k=0..4) yk Lk(x). So

    p4(x) =  (1) · (x+1)/(−2+1) · (x−0)/(−2−0) · (x−1)/(−2−1) · (x−2)/(−2−2)
          + (−1) · (x+2)/(−1+2) · (x−0)/(−1−0) · (x−1)/(−1−1) · (x−2)/(−1−2)
          +  (1) · (x+2)/(0+2) · (x+1)/(0+1) · (x−1)/(0−1) · (x−2)/(0−2)
          + (−1) · (x+2)/(1+2) · (x+1)/(1+1) · (x−0)/(1−0) · (x−2)/(1−2)
          +  (1) · (x+2)/(2+2) · (x+1)/(2+1) · (x−0)/(2−0) · (x−1)/(2−1)

Then simplified, p4(x) = (2/3) x⁴ − (8/3) x² + 1.

ICM 145 – 201


An ‘Easier’ Lk Formula

Compact Expressions
For a set [xk] of n + 1 knots, we defined Li(x) = ∏(k=0..n, k≠i) (x − xk)/(xi − xk).
This formula is computationally intensive.

Set w(x) = ∏(k=0..n) (x − xk).

1. The numerator of Li is w(x)/(x − xi).
2. The denominator of Li is w(x)/(x − xi) evaluated at xi. Since w(xi) = 0,
   rewrite as w(x)/(x − xi) = (w(x) − w(xi))/(x − xi), and take the limit as x → xi:

       lim(x→xi) (w(x) − w(xi))/(x − xi) = w′(xi)

Thus Li(x) = w(x) / ( (x − xi) · w′(xi) ). A very compact formula!

ICM 146 – 201


Properties of the Lk ’s
Proposition
For the Lagrange interpolating polynomial pn(x) = Σ(k=0..n) yk Lk(x),

1. pn(x) is the unique nth degree polynomial s.t. pn(xk) = yk for k = 0..n.
2. Lk(xj) = δ(kj) = { 1 if j = k; 0 if j ≠ k }   (See Kronecker delta.)
3. Σ(k=0..n) Lk(x) = 1
4. If q(x) is a polynomial of degree ≤ n with yk = q(xk), then q ≡ pn
5. The set { Lk(x) : k = 0..(n−1) } is a basis of P(n−1)

Theorem (Lagrange Interpolation Error)

If f ∈ C^(n+1)[a, b] and {xk} ⊂ [a, b], then

    en = | f(x) − pn(x) | ≤ (b − a)^(n+1) / (n + 1)! · max | f^(n+1)(x) |
ICM 147 – 201
Drawbacks

More Knots
To decrease the error, use more knots. But . . . all the Lk (x) change.
1. Set {xk} = {−2, 1, 2}. Then

    L0(x) = (x−1)/(−2−1) · (x−2)/(−2−2) = (1/12)x² − (1/4)x + 1/6
    L1(x) = (x+2)/(1+2) · (x−2)/(1−2)   = −(1/3)x² + 4/3
    L2(x) = (x+2)/(2+2) · (x−1)/(2−1)   = (1/4)x² + (1/4)x − 1/2

2. Set {xk} = {−2, −1, 1, 2}. Then

    L0(x) = (x+1)/(−2+1) · (x−1)/(−2−1) · (x−2)/(−2−2)
          = −(1/12)x³ + (1/6)x² + (1/12)x − 1/6
    L1(x) = (x+2)/(−1+2) · (x−1)/(−1−1) · (x−2)/(−1−2)
          = (1/6)x³ − (1/6)x² − (2/3)x + 2/3
    L2(x) = (x+2)/(1+2) · (x+1)/(1+1) · (x−2)/(1−2)
          = −(1/6)x³ − (1/6)x² + (2/3)x + 2/3
    L3(x) = (x+2)/(2+2) · (x+1)/(2+1) · (x−1)/(2−1)
          = (1/12)x³ + (1/6)x² − (1/12)x − 1/6
ICM 148 – 201


Interlude: Bernstein Polynomials
Definition (Bernstein Polynomials of f )
Bernstein Basis Polynomials: b(n,k)(x) = C(n,k) x^k (1 − x)^(n−k) for k = 0..n

Bernstein Polynomial of f: Let f : [0, 1] → R. Then

    Bn(f) = Σ(k=0..n) f(k/n) · C(n,k) · x^k (1 − x)^(n−k)

Note: If g : [a, b] → R, then use f(x) = g( a + (b − a)x )

Example
Let f(x) = x³ for x ∈ [0, 1]. Then

    Bn(f) = Σ(k=0..n) (k³/n³) · C(n,k) · x^k (1 − x)^(n−k)

    B1(x) = x                              B2(x) = (1/4)x + (3/4)x²
    B3(x) = (1/9)x + (2/3)x² + (2/9)x³     B4(x) = (1/16)x + (9/16)x² + (3/8)x³
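The sum above is a one-liner; a Python sketch (illustration only, `bernstein` is our own helper), checked against the B4 listed in the example:

```python
from math import comb

def bernstein(f, n, x):
    """Evaluate the degree-n Bernstein polynomial of f (on [0, 1]) at x."""
    return sum(f(k / n) * comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

# For f(x) = x^3, B4 should agree with (1/16)x + (9/16)x^2 + (3/8)x^3:
x = 0.3
print(bernstein(lambda t: t ** 3, 4, x))
print(x / 16 + 9 * x * x / 16 + 3 * x ** 3 / 8)   # the two values agree
```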

ICM 149 – 201


Bernstein Basis Functions
Bernstein Basis Functions, n = 3
k=0: b(3,0)(x) = (1 − x)³      k=1: b(3,1)(x) = 3x(1 − x)²
k=2: b(3,2)(x) = 3x²(1 − x)    k=3: b(3,3)(x) = x³

ICM 150 – 201


Bernstein and Lagrange
Example ( f(x) = Heaviside(x − 1/2) )

[Figure: the Lagrange interpolants p1 and p4 compared with B1(f) and B4(f)]
ICM 151 – 201
Newton Interpolation

Newton Basis Polynomials


In order to make it easy to add a new knot, we change the set of basis
polynomials. Given a set of n + 1 knots {xk}, set

    N0(x) = 1
    N1(x) = (x − x0)
    N2(x) = (x − x0)(x − x1)
    N3(x) = (x − x0)(x − x1)(x − x2)
    ...
    Nn(x) = (x − x0)(x − x1)(x − x2) ··· (x − x(n−1))

Now let

    Pn(x) = Σ(k=0..n) ak Nk(x)

Note that B_N = { Nk(x) | k = 0..n } forms a basis for Pn.

ICM 152 – 201


The Newton Coefficients
Calculating the ak ’s
For a set of n + 1 data points {[xk, yk]}, define the (forward) divided
differences recursively as

    [y0] = y0
    [y0, y1] = ( [y1] − [y0] ) / (x1 − x0)
    [y0, y1, y2] = ( [y1, y2] − [y0, y1] ) / (x2 − x0)
    ...

Then the Newton interpolating polynomial is

    Pn(x) = [y0] + [y0, y1](x − x0) + [y0, y1, y2](x − x0)(x − x1)
          + [y0, y1, y2, y3](x − x0)(x − x1)(x − x2) + ···
          = Σ(k=0..n) [y0, . . . , yk] Nk(x)
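The recursion can be run in place on one coefficient array, and the Newton form evaluates by nested multiplication. A Python sketch (illustration only; both helpers are ours), using the data set S worked in the samples that follow:

```python
def divided_differences(xs, ys):
    """Return the Newton coefficients [y0], [y0,y1], ..., [y0,...,yn]."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        # Overwrite from the bottom up so lower-order entries survive.
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate sum a_k N_k(x) by nested (Horner-like) multiplication."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs = [-2, -1, 0, 1, 2]
ys = [1, -1, 1, -1, 1]
a = divided_differences(xs, ys)
print(a)                         # ≈ [1, -2, 2, -4/3, 2/3], the tableau values
print(newton_eval(xs, a, 0.5))   # P4(0.5) ≈ 0.375
```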

ICM 153 – 201


First Sample
A Used Polynomial
Let S = { [−2, 1], [−1, −1], [0, 1], [1, −1], [2, 1] }.
Begin by building a difference tableau.

    x                     −2    −1     0     1     2
    y                      1    −1     1    −1     1
    [y0]                   1    −1     1    −1     1
    [y0,y1]                  −2     2    −2     2
    [y0,y1,y2]                   2    −2     2
    [y0,y1,y2,y3]                 −4/3    4/3
    [y0,y1,y2,y3,y4]                  2/3

Then P4(x) = Σ(k=0..4) [y0, . . . , yk] Nk(x)

    = 1 − 2(x+2) + 2(x+2)(x+1) − (4/3)(x+2)(x+1)(x) + (2/3)(x+2)(x+1)(x)(x−1)

    P4(x) = 1 − (8/3)x² + (2/3)x⁴
ICM 154 – 201
Second Sample

The Heaviside Function


Set S = { [0, 0], [1/5, 0], [2/5, 0], [3/5, 1], [4/5, 1], [1, 1] }.
Begin by building a difference tableau.

    x                  0    0.2    0.4    0.6    0.8    1
    y                  0     0      0      1      1     1
    [y0,y1]               0      0      5      0     0
    [y0,y1,y2]                0    25/2  −25/2    0
    [y0,...,y3]            125/6  −125/3   125/6
    [y0,...,y4]                −625/8    625/8
    [y0,...,y5]                     625/4

Then P5(x) = Σ(k=0..5) [y0, . . . , yk] Nk(x)

    P5(x) = (137/12)x − (875/8)x² + (1000/3)x³ − (3125/8)x⁴ + (625/4)x⁵

ICM 155 – 201


Two Comparisons

Example ( cos(πx) with Lagrange & Taylor polynomials )

  Lagrange: L4 = (x+1)(x)(x−1)(x−2)/24 + (x+2)(x)(x−1)(x−2)/6
               + (x+2)(x+1)(x−1)(x−2)/4 + (x+2)(x+1)(x)(x−2)/6
               + (x+2)(x+1)(x)(x−1)/24

  Newton:   N4 = 1 − 2(x+2) + 2(x+2)(x+1) − (4/3)(x+2)(x+1)(x)
               + (2/3)(x+2)(x+1)(x)(x−1)

  Taylor:   T4 = 1 − (π²/2)x² + (π⁴/24)x⁴
ICM 156 – 201
Second Comparison

Shifted Heaviside: f(x) = Heaviside(x − 1/2) on [0, 1]

  Lagrange:  L5 = (3125/12)(x)(x − 1/5)(x − 2/5)(x − 4/5)(x − 1)
                − (3125/24)(x)(x − 1/5)(x − 2/5)(x − 3/5)(x − 1)
                + (625/24)(x)(x − 1/5)(x − 2/5)(x − 3/5)(x − 4/5)

  Newton:    N5 = (125/6)(x)(x − 1/5)(x − 2/5)
                − (625/8)(x)(x − 1/5)(x − 2/5)(x − 3/5)
                + (625/4)(x)(x − 1/5)(x − 2/5)(x − 3/5)(x − 4/5)

  Bernstein: B5 = 10x³(1 − x)² + 5x⁴(1 − x) + x⁵

  Taylor:    T5 centered at the middle a = 1/2: Not possible. (Why?)
             Centered at a ∈ [0, 1/2), T5 = 0
             Centered at a ∈ (1/2, 1], T5 = 1
ICM 157 – 201
Interlude: Splines

Splines
Lagrange and Newton polynomials oscillate excessively when there are a
number of closely spaced knots. To alleviate the problem, use “splines,”
piecewise, smaller-degree polynomials with conditions on their
derivatives. The two most widely used splines:
Bézier splines are piecewise Bernstein polynomials [Casteljau (1959) and
Bézier (1962)].
Cubic B-splines are piecewise cubic polynomials with second derivative
equal to zero at the joining knots [Schoenberg (1946)].

Along with engineering, drafting, and CAD, splines are used in a wide
variety of fields. TrueType fonts use 2-D quadratic Bézier curves.
PostScript and MetaFont use 2-D cubic Bézier curves.

ICM 158 – 201


Exercises, I

Exercises
For each of the functions given in 1. to 5.:
• Find the Lagrange polynomial of order 6
• Find the Newton polynomial of order 6
• Find the Bernstein polynomial of order 6
and plot the interpolation polynomial with the function.
1. f(x) = sin(2πx) on [0, 1]
2. g(x) = ln(x + 1) on [0, 2]
3. h(x) = tan(sin(x)) on [−π, π]
4. k(x) = x/(x² + 1) on [−10, 10]
5. S(x) = ∫₀^x [ sin(t²/2) − √π/(2x) ] dt for x ∈ [0, 10]

6. Find an interpolating polynomial for the data given below. Plot the
polynomial with the data.

0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0
4.2 2.2 2.0 8.7 5.7 9.9 0.44 4.8 0.13 6.4

ICM 159 – 201


Exercises, II

Exercises
7. An error bound for Newton interpolation with n + 1 knots {xk} is

       | f(x) − N(x) | ≤ 1/(n + 1)! · max | f^(n+1)(x) | · | ∏(k=0..n) (x − xk) |

   Show this bound is less than or equal to the Lagrange interpolation error
   bound. How does this make sense in light of the unicity of interpolation
   polynomials? (NB: The formula for Newton interpolation also applies to
   Lagrange interpolation.)
8. Investigate interpolating Runge’s “bell function” r(x) = e^(−x²) on the
   interval [−5, 5]
   a. with 10 equidistant knots.
   b. with “Chebyshev knots” xk = 5 cos((n − j)π/n) with j = 0..10.
9. Write a Maple function that produces a difference tableau for a data set.
   Test your function with the data set produced by
   > myData := [seq([k, rand(-9..9)()], k = 1..10)];

ICM 160 – 201


Links and Others
More information:

Dr John Matthews’ modules: Wikipedia entries:


• Lagrange Interpolation • Lagrange Polynomial
• Newton Interpolation • Bernstein Polynomials
• Legendre Polynomials • Newton Polynomial
• Chebyshev Polynomials • Wavelets

See also: MathWorld (Wolfram Research)


Investigate:
• Aitken Interpolation
• Extrapolation
• Gauss’s Interpolation Formula
• Hermite Interpolation
• Newton-Cotes Formulas
• Thiele’s Interpolation Formula
• Vandermonde Matrix
• The Maple command PolynomialInterpolation in the CurveFitting package
• Matlab’s fit command
• Splines and Bézier curves
• Rational Interpolation

ICM 161 – 201


S III. Case Study: TI Calculator Numerics

Sections
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163

2. Floating Point Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165


3. Numeric Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .167
4. Numerically Finding Roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

5. Numeric Quadrature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169


6. Transcendental Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Appendix V:TI’s Solving Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172

ICM 162 – 201


S III. Case Study: TI Calculator Numerics
Introduction
Texas Instruments started a research project in 1965 to
design a pocket calculator. The first pocket calculators
appeared in the early 1970s from the Japanese companies
Sanyo, Canon, and Sharp. The HP-35 (it had 35 keys)
was the first scientific pocket calculator, introduced by
Hewlett Packard in 1972 for $395. In 1974, HP released
the HP-65, the first programmable pocket calculator.

Texas Instruments’ TI-8x series is based on the Zilog


Z-80 processor (1976), an 8-bit cpu originally running
at 2 MHz. The TI-81 came out in 1990 with a 2 MHz
Z80 and 2.4 KB RAM. The TI-84 Plus CE (released in
2015) has a 48 MHz Z80 with 4 MB Flash ROM and 256
KB RAM. In 2013, the calculators’ displays upgraded to
320 × 240 pixels from the 1990s’ 96 × 64 pixels.19

19 The Nspire series uses an ARM processor.
ICM 163 – 201
WabbitEmu is a calculator emulator for Linux, Mac, & Windows.
TI-80 Series Calculators
Timeline of the TI-80 Series
Model Year Z80 Processor RAM KB / ROM MB
TI-81 1990 2 MHz 2.4 / 0
TI-82 1993 6 MHz 28 / 0
TI-83 1996 6 MHz 32 / 0
TI-83 Plus 1999 6 MHz 32 / 0.5
TI-83 Plus SE 2001 15 MHz 128 / 2
TI-84 Plus 2004 15 MHz 128 / 1
TI-84 Plus SE 2004 15 MHz 128 / 2
TI-84 Plus C 2013 15 MHz 128 / 4
TI-84 Plus CE 2015 48 MHz 256 / 4

[Photos: TI-81 through TI-84 Plus CE]

ICM 164 – 201


TI Floating Point
TI Floating Point Structure
TI’s numeric model is not IEEE-754 compliant. The floating point format is
9 Bytes
0 +1 +2 +3 +4 +5 +6 +7 +8
s/T EXP DD DD DD DD DD DD DD

s/T: Sign and Type Byte


8 Bits
7 6 5 4 3 2 1 0
SIGN reserved TYPE

Floating point types: Real: 0; Complex: 0Ch; (List: 01h; Matrix: 02h; etc.)
EXP: Power of 10 exponent coded in binary, biased by 80h
DD: Mantissa in BCD, 7 bytes of two digits per byte. While the mantissa
has 14 digits, only 10 (+2 exponent digits) are displayed on the screen.
(Many math routines use 9 byte mantissas internally to improve accuracy.)

Examples: 3.14159265 = 00 80 31 41 59 26 50 00 00
−230.45 = 80 82 23 04 50 00 00 00 00
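The byte layout above can be sketched in Python. The encoder below is only an illustration of the format — the name `ti_encode` and the string-based digit extraction are ours, not TI’s ROM code:

```python
def ti_encode(x):
    """Pack x into the 9-byte TI real format sketched above:
    sign/type byte, power-of-10 exponent biased by 80h, and a
    14-digit BCD mantissa (two digits per byte)."""
    sign = 0x80 if x < 0 else 0x00      # TYPE bits are 0 for Real
    mant, exp = abs(x), 0
    while mant >= 10:                   # normalize to d.ddd... form
        mant, exp = mant / 10, exp + 1
    while 0 < mant < 1:
        mant, exp = mant * 10, exp - 1
    digits = f"{mant:.13f}".replace(".", "")[:14]
    bcd = bytes(int(digits[i]) << 4 | int(digits[i + 1])
                for i in range(0, 14, 2))
    return bytes([sign, 0x80 + exp]) + bcd
```

For instance, `ti_encode(3.14159265).hex()` reproduces the first example above.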



TI Floating Point Software
TI Floating Point Software
There are six RAM locations called the “Floating Point Registers” OP1 to
OP6; each is 11 bytes (with 9 byte mantissas); they are used extensively for
floating point computations. The routines listed below, called in assembly
programs, operate on the value in OP1 unless noted.
TI’s operating system20 includes the functions:
Standard                     Function               Transcendental
FPAdd (OP1 + OP2)            Ceiling                Sin    ASin
FPSub (OP1 − OP2)            Int                    Cos    ACos
FPRecip (1/OP1)              Trunc                  Tan    ATan
FPMult (OP1 × OP2)           Frac                   SinH   ASinH
FPDiv (OP1 ÷ OP2)            Round                  CosH   ACosH
FPSquare (OP1 × OP1)         RndGuard (to 10 d)     TanH   ATanH
SqRoot                       RnFx (to FIX)          LnX    EToX
Factorial (n · 0.5 ≥ −0.5)   Random                 LogX   TenX (10^OP1)
Max(OP1, OP2)                RandInt
Min(OP1, OP2)



20 See TI-83 Plus Developer Guide (also covers the TI-84 series).
Numeric Derivatives
nDeriv
TI uses a centered difference formula

   f′(a) ≈ ( f(a + e) − f(a − e) ) / (2e).

The default stepsize is e = 0.001. The command can’t be nested and
doesn’t check whether or not f is differentiable at a.
Syntax: nDeriv(expression,variable,value [,e])

(Screen images are from a TI-84+ SE with the 2.55MP OS.)
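The formula is short enough to mirror directly; this is a sketch of the formula only, not TI’s ROM routine:

```python
def nderiv(f, a, e=0.001):
    """Centered difference with TI's default step e = 0.001."""
    return (f(a + e) - f(a - e)) / (2 * e)
```

Note that `nderiv(abs, 0)` returns exactly 0, which shows why the command cannot detect that f fails to be differentiable at a.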



Numerically Finding Roots
solve
TI uses a blend of the bisection and secant root-finding algorithms. (See
Appendix V.) The default initial interval is [−10^99, 10^99]. Solve does not
find roots of even multiplicity since the algorithm requires a sign change.
(solve is available only through catalog or the Solver application.)
To find a different root, use a starting value close to the desired new
solution; a graph is a good ‘initial value generator.’
Syntax: solve(expression,variable,initial guess)



Numeric Quadrature
fnInt
TI uses an adaptive Gauss-Kronrod 7-15 quadrature

   ∫_{−1}^{1} f(x) dx ≈ Σ_{k=1}^{15} f(x_k) · w_k .

The default error tolerance is e = 10^−5. The command can’t be nested
and doesn’t check if f is integrable over [a, b].
Syntax: fnInt(expression,variable,lower,upper [,e])
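TI’s actual rule is the Gauss-Kronrod 7-15 pair; the sketch below substitutes the classic adaptive Simpson scheme to show the same refine-until-tolerance idea, and is not TI’s implementation:

```python
def fnint(f, a, b, tol=1e-5):
    """Adaptive quadrature sketch: subdivide until the Simpson
    error estimate meets the tolerance (Gauss-Kronrod 7-15 plays
    this role in the calculator)."""
    def simpson(lo, hi):
        mid = (lo + hi) / 2
        return (hi - lo) / 6 * (f(lo) + 4 * f(mid) + f(hi))
    def recurse(lo, hi, whole, tol):
        mid = (lo + hi) / 2
        left, right = simpson(lo, mid), simpson(mid, hi)
        if abs(left + right - whole) <= 15 * tol:
            # accept, with a Richardson correction term
            return left + right + (left + right - whole) / 15
        return (recurse(lo, mid, left, tol / 2) +
                recurse(mid, hi, right, tol / 2))
    return recurse(a, b, simpson(a, b), tol)
```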



Transcendental and Other Functions

Numeric Function Calculations


• For trigonometric, logarithmic, and exponential functions, TI uses a
modified CORDIC algorithm. CORDIC’s standard ‘rotations’ of 2^−k are
replaced with 10^−k. (See the CORDIC Project.)
• The factorial x!, where x is a multiple of 1/2 with −1/2 ≤ x ≤ 69, is
computed recursively using

         ⎧ x · (x − 1)!   x > 0
   x! =  ⎨ 1              x = 0
         ⎩ √π             x = −1/2
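The piecewise rule transcribes directly (the function name `ti_factorial` is ours):

```python
import math

def ti_factorial(x):
    """Half-integer factorial from the piecewise rule above,
    for x a multiple of 1/2 with -1/2 <= x <= 69."""
    if x > 0:
        return x * ti_factorial(x - 1)
    if x == 0:
        return 1.0
    if x == -0.5:
        return math.sqrt(math.pi)   # (-1/2)! = Gamma(1/2) = sqrt(pi)
    raise ValueError("x must be a multiple of 1/2 in [-1/2, 69]")
```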



Appendix V: TI’s Solving Algorithm

Bisection and Secant Combined


The solve function and the Solver application use a clever, modified
combination of the secant method and bisection.21 The logic is:

1. Order the bracketing points a and b so that |f(b)| ≤ |f(a)|.


2. Calculate a new point c using the secant method.
3. If c is:
a. outside the interval, replace c with the midpoint (bisection),
b. too close to an endpoint (within h), replace c with c = b ± h, a
specified minimum step in the interval
4. The new bracketing points are a & c or b & c, whichever pair has the sign
change.
5. If the error tolerance is met or the number of iterations is maximum, then
return c, otherwise, go to Step 1.
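The five steps can be sketched as follows; the tolerances, iteration cap, and minimum step h are illustrative choices, not TI’s:

```python
import math

def ti_style_solve(f, a, b, tol=1e-12, max_iter=100, hmin=1e-9):
    """Bisection/secant hybrid following the steps above."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("need a sign change on [a, b]")
    c = (a + b) / 2
    for _ in range(max_iter):
        # Step 1: order so that |f(b)| <= |f(a)|
        if abs(fb) > abs(fa):
            a, b, fa, fb = b, a, fb, fa
        # Step 2: secant step from the two bracketing points
        c = b - fb * (b - a) / (fb - fa) if fb != fa else (a + b) / 2
        lo, hi = min(a, b), max(a, b)
        # Step 3a: outside the interval -> midpoint (bisection)
        if not lo < c < hi:
            c = (lo + hi) / 2
        # Step 3b: too close to an endpoint -> minimum step from b
        if abs(c - b) < hmin:
            c = b + math.copysign(hmin, c - b if c != b else a - b)
        fc = f(c)
        # Step 5: stop when the tolerance is met
        if abs(fc) <= tol or abs(b - a) <= tol:
            return c
        # Step 4: keep the pair with the sign change
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c
```

For example, `ti_style_solve(lambda x: math.cos(x) - x, 0, 1)` converges to the fixed point of cos.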

21 “Solve() uses a combination of bisection and the secant method, as described in
Shampine and Allen, Numerical Computing: An Introduction, Saunders, 1973”
(pp. 96–100 and 244), according to the TI-85 & TI-89 Knowledge Bases.
Exercises, I
Problems
1. Enter 1 + 1E−13. Now enter Ans − 1. Explain the result.

2. Enter π − 3.141592654 on a TI-84 and a TI Nspire CAS. Explain the
different results.

3. Explain the result of nDeriv(|x|,x,0).

4. Define Y1 to be the function f(x) = 10^−8 (x − π/2)^2 / (10^−16 + (x − π/2)^2).
Explain the results from using solve with an ‘initial guess’ of:
a. 0
b. 1.5
5. Define f by f(x) = (1/3) x^(−8/9). Compare evaluating ∫_{−1}^{1} f(x) dx with
a. a TI-84+ SE,
b. Maple.



Exercises, II
Problems
6. Using the Gamma function, we can define x! = Γ(x + 1) = ∫_0^∞ z^x e^−z dz.
Compute (1/4)! using the Gamma function with a calculator and with
Maple.
7. Investigate the integral ∫_{−1}^{1} dx / (3x)^(1/39) numerically and symbolically.
First, graph the integrand.

8. Report on how the calculator computes points to draw a graph.


9. Compare graphs of s(x) = ∛x over the interval −1 ≤ x ≤ 1 from the
calculator and from Maple. Explain the differences.

10. Explain the possible sources of error when the calculator computes

    ∫_0^1 ( (d/dT) T^10 |_{T=X} ) dX

using nDeriv and fnInt.


Projects



VII. Projects
The Project List
• One Function For All . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
I. Control Structures
• A Bit Adder in Excel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
• The Collatz Conjecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
• The CORDIC Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
• The Cost of Computing a Determinant . . . . . . . . . . . . . . . . . . . . . . . . 184
II. Numerical Differentiation
• Space Shuttle Acceleration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
III. Root Finding
• Commissioner Loeb’s Demise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
• Roots of Wilkinson’s “Perfidious Polynomial” . . . . . . . . . . . . . . . . . . 190
• Bernoulli’s Method and Deflation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
IV. Numerical Integration
• The Fourier Power Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .196
V. Polynomial Interpolation
• Spline Fit to a Transition Curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
One Function for All
Project: One Function for All, The Normal Distribution
The mean and standard deviation for total SAT scores for 2019 are µ = 1059
(from Evidence-Based Reading 531 + Math 528) and σ = 210, respectively.22
Define the function

   F(x) = 1/(√(2π) σ) · ∫_{−∞}^{x} e^{−(1/2)((t − µ)/σ)^2} dt

1. Estimate the derivative of F(x) for student scores that are one standard
deviation above the mean.

2. The minimum score for students scoring in the top 10% is found by
solving 0.10 = 1 − F(x). Use a root finding method to find x.

3. Use a quadrature rule to evaluate F(1190) — Appalachian State
University’s mean score for entering first-year students in 2019.

4. Use an interpolating polynomial to approximate F(x) for x ∈ [1100, 1500].
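Parts 2 and 3 can be sanity-checked against the closed form of F via the error function, with a simple bisection for the cutoff (the function names and bracket are ours; the project itself asks for your own quadrature and root-finder):

```python
import math

MU, SIGMA = 1059, 210   # 2019 total SAT mean and standard deviation

def F(x):
    """Normal CDF via erf, for checking the quadrature results."""
    return 0.5 * (1 + math.erf((x - MU) / (SIGMA * math.sqrt(2))))

def top_decile_cutoff(lo=MU, hi=MU + 5 * SIGMA, tol=1e-6):
    """Solve 0.10 = 1 - F(x) by bisection (project part 2)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if 1 - F(mid) > 0.10:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```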



22 From 2019 SAT Suite of Assessments Annual Report, The College Board.
A Bit Adder in Excel

Project: Adding Bits in Excel
1. Implement the one bit adder F(a, b, c0) = (s, c1) in Excel using
   IF-THEN statements and AND, OR, and NOT functions.
2. Test your design with all 8 triples of 1 bit values.
3. Make an eight bit adder with carries.
4. Test your design with 10 random pairs of 8 bit numbers.
5. Develop a mathematical model for the cost of computing the sum of
   two eight bit values.

One Bit Adder with Carry (built from XOR, AND, and OR gates)
 a  b  c0 | s  c1
 0  0  0  | 0  0
 0  0  1  | 1  0
 0  1  0  | 1  0
 0  1  1  | 0  1
 1  0  0  | 1  0
 1  0  1  | 0  1
 1  1  0  | 0  1
 1  1  1  | 1  1
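Before wiring the spreadsheet, the gate logic can be prototyped in a few lines (a sketch of the standard full-adder identities, assuming the usual two-XOR/two-AND/one-OR construction):

```python
def full_adder(a, b, c0):
    """One-bit full adder: s = (a XOR b) XOR c0,
    c1 = (a AND b) OR ((a XOR b) AND c0)."""
    s = (a ^ b) ^ c0
    c1 = (a & b) | ((a ^ b) & c0)
    return s, c1

def add8(x, y):
    """Eight-bit ripple-carry adder (project step 3): chain the
    one-bit adder through the bits, propagating the carry."""
    carry, total = 0, 0
    for i in range(8):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total, carry
```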



The Collatz Conjecture Project
Lothar Collatz’s Proposition
Lothar Collatz posed a problem in 1937 that is quite simple to state, but that
still eludes proof. For any positive integer n define the sequence

   a_1 = n  and  a_{k+1} = { a_k / 2     if a_k is even
                           { 3a_k + 1    if a_k is odd      for k ≥ 1.

Collatz conjectured the sequence would always reach 1 no matter the starting
value n ∈ ℕ.

Conjecture (Collatz’s 3n + 1 Problem)


For every n ∈ ℕ, there is a k ∈ ℕ such that the sequence above has a_k = 1.

Currently. . .
The conjecture has been verified for all starting values up to 87 · 2^60 ≈ 10^20.
Read “The 3x + 1 Problem” by J. Lagarias (January 1, 2011 version) and check
Eric Roosendaal’s web site http://www.ericr.nl/wondrous/.



The Collatz Conjecture
Project
1. Write a program, Collatz, that determines the total stopping time of a
starting value n. That is, given a1 = n, find the first k such that ak = 1.
Define Collatz(n) = k.
2. Explain why Collatz(2^m) = m.
3. Generate a point-plot of the sequence [n, Collatz(n)] for n = 1 to 100,000.
4. Use your graph to find the maximum value of Collatz(n) for n from 1 to
100,000.
5. Which initial value n  106 produces the largest total stopping time?
(Careful! For example, Collatz(159,487) = 183, but before a183 = 1 this
sequence hits a67 = 17,202,377,752, a value that very much overflows an
unsigned 32-bit integer.)
6. Report on the history of Collatz’s conjecture.

Extra For Experts: Prove that Collatz(2^m · n) = m + Collatz(n).
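Step 1 of the project can be sketched directly; counting iterations gives Collatz(2^m) = m as in step 2, and Python’s arbitrary-precision integers sidestep the 32-bit overflow warned about in step 5:

```python
def collatz(n):
    """Total stopping time: number of iterations for a_1 = n
    to reach 1."""
    k = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        k += 1
    return k
```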


The CORDIC Algorithm
Background
In 1959, Jack Volder designed a way to compute trigonometric functions
very quickly, the CORDIC algorithm,23 while working on digitizing the
navigation system of the B58 Hustler, the first Mach 2-capable supersonic
bomber. During the ’70’s, John Walther extended CORDIC to compute
exponentials, logs, and hyperbolic trigonometric functions. The algorithm
became the standard for calculators (using BCD).

CORDIC Recurrence Equations

   x_{k+1} = x_k − 2^−k · m · d_k · y_k
   y_{k+1} = y_k + 2^−k · d_k · x_k
   z_{k+1} = z_k − d_k · s_k

where m = 1 (trig), 0 (arith), or −1 (hypertrig), d_k is ±1, and s_k is a scaling
factor.

23 Jack Volder, “The CORDIC Computing Technique,” 1959 Proceedings of the
Western Joint Computer Conference, pp. 257–261; and,
—, “The Birth of CORDIC,” J. VLSI Signal Proc. 25, 2000, pp. 101–105.
The CORDIC Parameters

Parameter Choices
                           Rotation Mode                      Vectoring Mode
                           d_k = sgn(z_k)   (z_k → 0)         d_k = −sgn(y_k)   (y_k → 0)

m = 1                      ⟨x0, y0, z0⟩ = ⟨K, 0, θ⟩           ⟨x0, y0, z0⟩ = ⟨x, y, 0⟩
s_k = tan^−1(2^−k)         x_n → cos(θ); y_n → sin(θ)         z_n → tan^−1(y/x)

m = 0                      ⟨x0, y0, z0⟩ = ⟨x, 0, z⟩           ⟨x0, y0, z0⟩ = ⟨x, y, 0⟩
s_k = 2^−k                 y_n → x × z                        z_n → y/x

m = −1                     ⟨x0, y0, z0⟩ = ⟨K′, 0, θ⟩          ⟨x0, y0, z0⟩ = ⟨x, y, 0⟩
s_k = tanh^−1(2^−k)        x_n → cosh(θ); y_n → sinh(θ)       z_n → tanh^−1(y/x)
(some s_k repeated:        ⟨x0, y0, z0⟩ = ⟨K′, 0, θ⟩          ⟨x0, y0, z0⟩ = ⟨w + 1, w − 1, 0⟩
 k = 4, 13, 40, 121, …)    x_n + y_n → e^θ                    z_n → (1/2) ln(w)

   K = ∏_{k=0}^{n} cos(s_k),   K′ = ∏_{k=0}^{n} cosh(s_k)



CORDIC in Maple

CORDIC Trigonometric Functions with Maple


CORDIC[Trig] := proc(t)
  local n, K, x, y, z, j, del;
  n := 47;
  K := cos(arctan(1.0));
  for j to n-1 do
    K := K*cos(arctan(2.^(-j)))
  end do;
  (x[1], y[1], z[1]) := (K, 0, evalf(t));
  for j to n+1 do
    del := sign(z[j]);
    if del = 0 then del := 1 end if;
    x[j+1] := x[j] - del*y[j]*2.^(-j+1);
    y[j+1] := y[j] + del*x[j]*2.^(-j+1);
    z[j+1] := z[j] - del*arctan(2.^(-j+1));
  end do;
  return fnormal([x[j], y[j], z[j]]);
end proc:
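The same procedure transliterates to Python for readers without Maple (a direct sketch of the rotation-mode recurrence, not a drop-in for the Maple program):

```python
import math

def cordic_trig(t, n=47):
    """Rotation-mode CORDIC: returns (cos t, sin t, residual angle)."""
    # Scaling constant K = prod cos(arctan(2^-j)), j = 0 .. n-1
    K = 1.0
    for j in range(n):
        K *= math.cos(math.atan(2.0 ** (-j)))
    x, y, z = K, 0.0, float(t)
    for j in range(n + 1):
        d = 1.0 if z >= 0 else -1.0     # drive z toward 0
        x, y, z = (x - d * y * 2.0 ** (-j),
                   y + d * x * 2.0 ** (-j),
                   z - d * math.atan(2.0 ** (-j)))
    return x, y, z
```

Convergence requires |t| to be within the total rotation budget (about 1.74 radians); larger arguments need range reduction first.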



The CORDIC Project

Project
1. Modify the Maple program CORDIC[Trig] so as to compute arctan(θ).
2. Write a Maple program CORDIC[HyperTrig] that computes hyperbolic
trigonometric functions using CORDIC.
3. Write a Maple program CORDIC[Exp] that computes the exponential
function using CORDIC.
4. Write a Maple program CORDIC[Ln] that computes the logarithmic
function using CORDIC.
5. Report on the complex number basis of the CORDIC algorithm.
6. Create a presentation on the history of the CORDIC algorithm.

Extra For Experts:


Write a single program that computes all possible CORDIC outputs.



The Cost of Computing a Determinant

Project: The Computation Cost of a Determinant


1. a. Define a function in Maple that produces an arbitrary square matrix:
   > M := n -> Matrix(n, n, symbol = a):
   b. Define a ‘shortcut’ for determinant:
   > det := A -> LinearAlgebra[Determinant](A):
   c. Define a function for calculating the computation cost of a
   determinant ignoring finding subscripts:
   > Cost := expr -> subs(subscripts = 0, codegen[cost](expr)):
   d. Test your functions with:
   > A := M(2):
   > A, det(A), Cost(det(A));

2. Write a loop that computes the cost of a 1 × 1 determinant up to a
10 × 10 determinant. (A 10 × 10 determinant can take nearly 20 minutes.)

3. Develop a mathematical model for the cost of computing a
determinant in terms of its dimension n.
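As a starting point for the model, the operation count for cofactor expansion satisfies a simple recurrence; this hand-derived count is ours and may differ in detail from what Maple’s codegen[cost] reports:

```python
def cofactor_cost(n):
    """(multiplications, additions) for an n x n determinant
    expanded by cofactors."""
    if n == 1:
        return (0, 0)
    m, a = cofactor_cost(n - 1)
    # n cofactors, each costing the (n-1)x(n-1) work plus one multiply
    # by its entry; then n - 1 additions/subtractions to combine them.
    return (n * (m + 1), n * a + (n - 1))
```

For n = 2 this gives (2, 1), matching ad − bc.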
Space Shuttle Acceleration

Space Shuttle Acceleration: The Situation23


The Space Shuttle had three phases from ignition to achieving orbit. The first phase
began by the shuttle lifting off the launch pad using solid rocket boosters (SRBs) to
accelerate extremely quickly. At approximately two minutes, the SRBs were separated
and fell back to Earth.
The second phase began with SRB separation and lasted approximately 6.5 minutes.
The shuttle speed increased to 17,500 mph — the speed needed to achieve orbit.
(Note this speed is a good deal less than the Earth’s escape velocity, 25,000 mph.)
The third phase began at about 9 minutes when the fuel external tank was jettisoned
and the shuttle entered orbit.
NASA’s data (next pg) lists the time, altitude, and velocity for STS-121 from July 4,
2006.
Project.
1. Using different divided difference formulas, compute and graph the shuttle’s
acceleration. Compare the results from each method.
2. Determine the maximum acceleration and its time.
3. Approximate the acceleration when the shuttle entered orbit at 9 minutes.
4. Explain why the shuttle’s velocity cannot be computed from ΔAltitude/Δtime.

23 Adapted from NASA’s “Space Shuttle Ascent”.
Space Shuttle Acceleration, II
The Data
Time Altitude Velocity Time Altitude Velocity
(s) (m) (m/s) (s) (m) (m/s)
0 0 0 280 105,321 2651
20 1,244 139 300 107,449 2915
40 5,377 298 320 108,619 3203
60 11,617 433 340 108,942 3516
80 19,872 685 360 108,543 3860
100 31,412 1026 380 107,690 4216
120 44,726 1279 400 106,539 4630
140 57,396 1373 420 105,142 5092
160 67,893 1490 440 103,775 5612
180 77,485 1634 460 102,807 6184
200 85,662 1800 480 102,552 6760
220 92,481 1986 500 103,297 7327
240 98,004 2191 520 105,069 7581
260 102,301 2417
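Project step 1 can be started with a centered divided difference over the tabulated velocities; this is a sketch for the interior points only (forward and backward differences are still needed at the endpoints):

```python
# Velocities (m/s) at 20 s intervals from the STS-121 table above.
vel = [0, 139, 298, 433, 685, 1026, 1279, 1373, 1490, 1634, 1800,
       1986, 2191, 2417, 2651, 2915, 3203, 3516, 3860, 4216, 4630,
       5092, 5612, 6184, 6760, 7327, 7581]

def accel(i, h=20):
    """a(t_i) ~ (v_{i+1} - v_{i-1}) / (2h) at interior data points."""
    return (vel[i + 1] - vel[i - 1]) / (2 * h)

accels = [accel(i) for i in range(1, len(vel) - 1)]
```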



Commissioner Loeb’s Demise24

The Situation
Commissioner Loeb was murdered in his office. Dr. “Ducky” Mallard,
NCIS coroner, measured the corpse’s core temperature to be 90°F at
8:00 pm. One hour later, the core temperature had fallen to 85°F.
Looking through the HVAC logs to determine the ambient temperature,
Inspector Clouseau discovered that the air conditioner had failed at 4:00
pm; the Commissioner’s office was 68°F then. The log’s strip chart shows
Loeb’s office temperature rising at 1°F per hour after the AC failure; at
8:00 pm, it was 72°F.

Inspector Clouseau believes Snidely Whiplash murdered the
Commissioner, but Whiplash claims he was being interviewed by Milo
Bloom, staff reporter of the Bloom Beacon, at the time. Bloom’s
interview started at 6:30 pm and lasted until 7:15. Whiplash’s lawyer,
Horace Rumpole, believes he can prove Snidely’s innocence.



24 Adapted from A Friendly Introduction to Numerical Analysis by Brian Bradie.
Commissioner Loeb’s Demise

First Steps
1. The office temperature is Tambient = 72 + t where t = 0 is 8:00 pm.

2. Newton’s Law of Cooling applied to the corpse’s temperature gives

   dT/dt = −k(T − Tambient) = −k(T − t − 72)   with T(0) = 90

3. A little differential equation work (with an integrating factor) yields

   T(t) = 72 + t + e^(−kt) · (18 + 1/k) − 1/k

4. To find k, use the other data point. Set T(1) = 85°F, then solve for k.

5. Last, solve T(tD) = 98.6 for tD, the time of death.
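Steps 4 and 5 can be checked with any of the four root-finders; here is a bisection sketch (the bracketing endpoints are our choices, found by glancing at a graph, and are not part of the original):

```python
import math

def T(t, k):
    """Corpse temperature from the First Steps solution (t = 0 at 8:00 pm)."""
    return 72 + t + math.exp(-k * t) * (18 + 1 / k) - 1 / k

def bisect(g, lo, hi, tol=1e-10):
    """Plain bisection; assumes g changes sign on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Step 4: T(1) = 85 determines k; Step 5: T(t_D) = 98.6 gives the time of death.
k = bisect(lambda k: T(1, k) - 85, 0.1, 2.0)
t_death = bisect(lambda t: T(t, k) - 98.6, -6.0, 0.0)
```

Since t_death lands between −1.3 and −0.9 (roughly 6:40–7:05 pm), the result bears directly on Whiplash’s alibi.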



Commissioner Loeb’s Demise
Project
1. Use the four methods, Bisection, Newton, Secant, and Regula Falsi, to
a. Find the value of k using ‘First Steps 4.’
b. Find tD, the time of death of Commissioner Loeb, from ‘First
Steps 5.’
c. What was the temperature of the Commissioner’s office at tD?
d. How does error in the value of k affect error in the computation of
tD?
2. Compare the four methods and their results.
3. Graph T (t) over the relevant time period.
4. Chief Inspector Charles LaRousse Dreyfus answers the press’s questions:
• Is Inspector Clouseau right?
• Could Snidely Whiplash have killed Commissioner Loeb?
• Will Horace Rumpole get another client off?
• Will Milo Bloom win a Pulitzer?
• Will Bullwinkle finally pull a rabbit out of his hat?



Wilkinson’s Perfidious Polynomial
W(x) = x^20 − 210 x^19 + 20615 x^18 − 1256850 x^17
       + 53327946 x^16 − 1672280820 x^15
       + 40171771630 x^14 − 756111184500 x^13
       + 11310276995381 x^12 − 135585182899530 x^11
       + 1307535010540395 x^10 − 10142299865511450 x^9
       + 63030812099294896 x^8 − 311333643161390640 x^7
       + 1206647803780373360 x^6 − 3599979517947607200 x^5
       + 8037811822645051776 x^4 − 12870931245150988800 x^3
       + 13803759753640704000 x^2 − 8752948036761600000 x
       + 2432902008176640000

[Plot of W(x); Plot Window: [−2, 23] × [−3·10^12, +3·10^12]]

24 See: James H Wilkinson, “The Perfidious Polynomial,” in Studies in Numerical
Analysis, ed. G Golub, MAA, 1984, pp. 3–28.
Wilkinson’s Polynomial’s Roots

Red box: root of w(x)
Blue circle: root of w_p(x) = w(x) + 10^−23 x^19



Wilkinson’s Polynomial’s Roots

The Project
1. Describe what happens when trying to find the root at x = 20 using
Newton’s method.
2. Describe what happens when trying to find the root at x = 20 using
Halley’s method.
3. Discover what happens when the constant term is perturbed. I.e.,
investigate the roots of p(x) = w(x) + 10^6.
4. Discover what happens when the x^1 term is perturbed. I.e.,
investigate the roots of p(x) = w(x) + x.



Bernoulli’s Method for Polynomial Roots

Bernoulli’s Method (1728)25


Let p(x) = x^n + a_{n−1} x^{n−1} + · · · + a_0 be a polynomial (wlog assuming p is
monic) and let r be the root of p with the largest magnitude. If r is real and
simple, define the sequence {x_k} recursively by:

   x_k = −a_{n−1} x_{k−1} − a_{n−2} x_{k−2} − · · · − a_0 x_{k−n}      k = 1, 2, . . .
   x_0 = 1,  x_{−1} = x_{−2} = · · · = x_{−n+1} = 0

Then
   x_{k+1} / x_k → r

• Bernoulli’s method works best when p has simple roots and r is not ‘close’
to p’s next largest root.
• “If the ratio does not tend to a limit, but oscillates, the root of greatest
modulus is one of a pair of conjugate complex roots.” (Whittaker &
Robinson, The Calculus of Observations, 1924.)
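The recurrence is a few lines of code (a sketch for the real, simple, dominant-root case described above):

```python
def bernoulli_largest_root(coeffs, iters=50):
    """Bernoulli's method for the largest-magnitude simple real root.
    coeffs = [a_{n-1}, ..., a_1, a_0] for the monic polynomial
    x^n + a_{n-1} x^{n-1} + ... + a_0."""
    n = len(coeffs)
    x = [0.0] * (n - 1) + [1.0]   # x_{-n+1} = ... = x_{-1} = 0, x_0 = 1
    for _ in range(iters):
        x.append(-sum(coeffs[j] * x[-1 - j] for j in range(n)))
    return x[-1] / x[-2]
```

For p(x) = x^2 − 5x + 6 (roots 2 and 3) the ratio tends to 3, the root of greatest modulus.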



25 in Daniel Bernoulli, Commentarii Acad. Sc. Petropol. III. (1732)
Deflation

Deflating a Polynomial
Let p(x) be a polynomial. If a root r of p is known, then the deflated
polynomial p1(x) is
           p(x)
   p1(x) = ─────
           x − r
The coefficients of p1 are easy to find using synthetic division.

The Technique
1. Use Bernoulli’s method to find r, the largest root of p
2. Deflate p to obtain p1
3. Repeat to find all roots.
Problem: Since there is error in r’s computation, there is error in p1 ’s
coefficients. Error compounds quickly with each iteration.
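Synthetic division itself is a one-pass loop (a sketch; the remainder it returns is p(r), which is 0 only when r is an exact root):

```python
def deflate(coeffs, r):
    """Synthetic division of the polynomial with coefficients
    coeffs = [a_n, ..., a_0] by (x - r). Returns the deflated
    polynomial's coefficients and the remainder p(r)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    remainder = out.pop()
    return out, remainder
```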



The Bernoulli Project

The Project
1. Expand Wilkinson’s “perfidious polynomial” into standard form

   W(x) = ∏_{k=1}^{20} (x − k) = a_n x^n + a_{n−1} x^{n−1} + · · · + a_0

2. Use 50 iterations of Bernoulli’s method to find the largest magnitude
root. What is the relative error?
3. Determine W1 (x), the deflated polynomial using the root from 2.
4. Use 50 iterations of Bernoulli’s method to find the largest magnitude
root of W1 (x). What is the relative error?
5. Determine W2 (x), the deflated polynomial using the root from 4.
6. Use 50 iterations of Bernoulli’s method to find the largest magnitude
root of W2 (x). What is the relative error?
7. Discuss the propagation of error in the deflations.
The Fourier Power Spectrum
Calculating the Power Spectrum of a Signal
The Power Spectrum of a signal f gives the amount of power of a specific
frequency component in the signal. The value at the frequency w is given by

   P(w) = √( a_w(f)^2 + b_w(f)^2 )

where a_w(f) and b_w(f) are the Fourier coefficients of f at the frequency w.
The Fourier coefficients a and b are computed with the integrals (w ≥ 1)

   a_w(f) = (1/π) ∫_{−π}^{π} f(t) cos(wt) dt   and   b_w(f) = (1/π) ∫_{−π}^{π} f(t) sin(wt) dt

Square Wave SW (t) = | sin(2t)|/ sin(2t)


Spectrum Graph of SW
The Spectrum Plot shows the power level at each frequency.
The Fourier Power Spectrum, II
The Project: Produce a Spectrum Plot of the Square Wave
We will use numerical integration to compute the spectrum of the square wave
SW (t) = | sin(3t)|/ sin(3t).
1. Choose a method from: midpoint, trapezoid, Simpson’s.
2. Use your method to numerically integrate P(w) for w = 1..100.
3. Use Maple to create a list Spectrum of lines [[k, 0], [k, P(k)]] for k = 1..100.
4. Graph the list with plot(Spectrum, thickness = 3).
5. Change the numerical integration method to either Gauss quadrature or
Gauss-Kronrod quadrature.
6. Recompute the power spectrum and its graph. Do the spectrum values
change? Is it faster or slower?
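A minimal version of steps 1 and 2 in Python, using composite Simpson as the chosen rule (Maple users can transliterate; taking the square wave as +1 at its zeros is our convention):

```python
import math

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h)
                          for i in range(1, n))
    return s * h / 3

def power(f, w):
    """P(w) = sqrt(a_w^2 + b_w^2), coefficients over [-pi, pi]."""
    aw = simpson(lambda t: f(t) * math.cos(w * t), -math.pi, math.pi) / math.pi
    bw = simpson(lambda t: f(t) * math.sin(w * t), -math.pi, math.pi) / math.pi
    return math.hypot(aw, bw)

def SW(t):
    """Square wave |sin(3t)|/sin(3t), taken as +1 at the zeros."""
    return math.copysign(1.0, math.sin(3 * t))
```

The fundamental shows up as P(3) ≈ 4/π, with (near-)zero power at frequencies off the k ≡ 3 (mod 6) pattern.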

Extra for Experts


1. Show that a(k) = 0 for k ≥ 1. (Hint: Integrals of even & odd functions.)
2. Then conclude that P(k) = |b(k)| for k ≥ 1.
3. Last, demonstrate that b(k) = 0 unless k ≡ 3 (mod 6).26
26 Extended Project: Determine this condition for SWn(t) = |sin(nt)|/sin(nt).
Spline Fit to a Transition Curve
Transition Curves
A transition curve is a section of highway or railroad track used to go from a
straight section into a curve. A transition curve is designed to prevent sudden
changes in lateral acceleration, which would occur with a circular segment, by
smoothly transitioning into and out of a curve. The basic transition curve
parametric function T(t) uses the Fresnel sine and cosine integrals

   C(t) = ∫_0^t cos((1/2)πu^2) du   and   S(t) = ∫_0^t sin((1/2)πu^2) du

to give T(t) = [C(t), S(t)]. These integrals do not have elementary
antiderivatives, so must be evaluated numerically.
A segment from the spiral T forms a transition
curve which will provide a smooth transition without
a sudden change in lateral acceleration.
Graph C and S to see their respective behaviors.

This project will investigate using splines to lay out a roadbed.
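Since the integrals must be done numerically, a quadrature sketch for C and S (composite Simpson is our choice of rule here) is a natural first step:

```python
import math

def fresnel(t, n=400):
    """C(t) and S(t) by composite Simpson's rule (n even), since
    cos(pi*u^2/2) and sin(pi*u^2/2) have no elementary antiderivative."""
    h = t / n
    c = s = 0.0
    for i in range(n + 1):
        u = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        c += w * math.cos(0.5 * math.pi * u * u)
        s += w * math.sin(0.5 * math.pi * u * u)
    return c * h / 3, s * h / 3
```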



Spline Fit to a Transition Curve, II
Spline Fitting a Transition Curve for a Roadbed
Two roads are perpendicular and need to be connected by a curve. We’ll
blend two transition curves to make the road. One curve will start at A
and turn towards the north until B. The second transition curve will start
at C and turn west, meeting the first at B. The two curves’ tangents
match at B.
We will compute waypoints along the centerline using the two transition
curves, then fit a cubic spline through the data points to lay out the
roadbed to make surveying and construction reasonable. Use the data
table below to determine the cubic spline.

Transition Curve Coordinates


A = (0.0, 0.0) (0.18, 0.003) (0.35, 0.02)
(0.52, 0.08) B = (0.66, 0.18) (0.75, 0.30)
(0.80, 0.47) (0.82, 0.65) C = (0.83, 0.82)



References
Selected References and Further Reading
• Brian Bradie, A Friendly Introduction to Numerical Analysis,
Pearson, 2006
• Richard Burden, J. Douglas Faires, & Annette Burden, Numerical
Analysis, 10th ed, Cengage Learning, 2015
• Laurent Demanet, Introduction to Numerical Analysis,
MIT OpenCourseWare, 2012
• Francis Hildebrand, Introduction to Numerical Analysis, 2nd ed,
Dover Publications, 1987
• James P Howard, II, Computational Methods for Numerical Analysis
with R, CRC Press, 2017

• Bookauthority’s “100 Best Numerical Analysis Books of All Time”



References
Selected Websites
• Wikipedia’s “Computational mathematics”
• Springer’s “Encyclopedia of Mathematics”: “Computational
mathematics”
• Simplicable’s “What is Computational Mathematics?”

• University of Waterloo’s “Welcome to Computational Mathematics”


• Council on Undergraduate Research’s “Computational Mathematics:
An Opportunity for Undergraduate Research” (pdf)

• Perform a Google Scholar search for computational mathematics


• Search on Amazon.com for computational mathematics textbooks



The End

Whew. After all that, it’s time to sit down. . .
