Introduction To Computational Mathematics - An Outline
Computational Mathematics:
An Outline
William C. Bauldry
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any
form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise,
except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without
the prior written permission of the Author.
10 9 8 7 6 5 4 3 2 1
Cover photos: Mount Field National Park, Tall Trees Walk, Tasmania © WmCB, 2018.
“1 + 1 = 3 for large enough values of 1.”
Note: Light blue text links inside this Outline; grey text links to a web page. ICM i
“On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the
machine wrong figures, will the right answers come out?’ ... I am not able
rightly to apprehend the kind of confusion of ideas that could provoke such a
question.” — Charles Babbage, Passages from the Life of a Philosopher, p. 67.
Preface: Is Computation Important?
A simple computational error can cause very serious problems.
Thanks go to the students of MAT 2310 who lived through the development of
both a new course and these slides. Many thanks also go to my colleagues Greg
Rhoads, René Salinas, and Eric Marland who co-designed the course and gave
great feedback during the adventure.
— WmCB, Jan 2020
I. Computer Arithmetic
Sections
1. Scientific Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2. Converting to Different Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
3. Floating Point Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
4. IEEE-754 Floating Point Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
5. Maple’s Floating Point Representation . . . . . . . . . . . . . . . . . . . . . . . . . 18
6. Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
I. Computer Arithmetic: Scientific Notation
Definitions of Scientific Notation
Normalized: Any numeric value can be written as
    d0.d1d2d3 … dn × 10^p
where 1 ≤ d0 ≤ 9.
Engineering: Any numeric value can be written as
    n.d1d2d3 … dm × 10^q
where 1 ≤ n ≤ 999 and q is a multiple of 3.
Terminating Expansions?
When does a fraction’s expansion terminate?
Base 10: A decimal fraction terminates when r = n/10^p = n/(2^p · 5^p).
Base 2: A binary fraction terminates when r = m/2^p.
Examples
1. 1/10 = 0.1₁₀ = 0.0 0011 0011 0011…₂ (the block 0011 repeats)
2. 1/3 = 0.333…₁₀ = 0.0101…₂ (the block 01 repeats)
3. √2 = 1.414 213 562 373 095 048 8…₁₀ ≐ 1.0110 1010 0000 1001 111₂
4. π = 3.141 592 653 589 793 238 5…₁₀ ≐ 11.0010 0100 0011 1111 01₂
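The terminating-expansion test above can be checked mechanically: after reducing the fraction, every prime factor of the denominator must divide the base. A small sketch using Python's fractions module (the helper name is ours):

```python
from fractions import Fraction
from math import gcd

def terminates_in_base(frac: Fraction, base: int) -> bool:
    """True iff frac's expansion terminates in `base`: after reduction,
    every prime factor of the denominator must divide the base."""
    d = frac.denominator              # Fraction reduces to lowest terms
    g = gcd(d, base)
    while g > 1:                      # strip out factors shared with the base
        while d % g == 0:
            d //= g
        g = gcd(d, base)
    return d == 1

print(terminates_in_base(Fraction(1, 10), 10))  # True:  1/10 = 0.1
print(terminates_in_base(Fraction(1, 10), 2))   # False: repeating 0011 block
print(terminates_in_base(Fraction(1, 3), 2))    # False
```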
Conversions
Advantages
• Eliminates some repeating expansions
• Rounding is simpler
• Displaying values is easier
Disadvantages
• Fewer numbers per 8 bits (100/256 ≈ 39%)
• Complicated arithmetic routines
• Slower to compute
Nearly all calculators use BCD formats.
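BCD stores each decimal digit in its own 4-bit nibble, so two digits fill a byte but use only 100 of its 256 patterns — the ≈ 39% figure above. A minimal illustrative encoder/decoder sketch (helper names are ours; nonnegative integers only):

```python
def to_bcd(n: int) -> str:
    """Encode a nonnegative integer one decimal digit per 4-bit nibble."""
    return " ".join(format(int(d), "04b") for d in str(n))

def from_bcd(bits: str) -> int:
    """Decode space-separated nibbles back to the decimal integer."""
    return int("".join(str(int(nibble, 2)) for nibble in bits.split()))

print(to_bcd(831))                   # 1000 0011 0001
print(from_bcd("1001 0110 0011"))    # 963
```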
Floating Point Numbers
Definition (Floating Point Representation)
A number x is represented (and approximated) as
    x ≐ s × f × b^(e−p)
where
    s: sign ±1,  f: mantissa,  b: base (usually 2, 10, or 16),
    e: biased exponent (shifted),  p: exponent’s bias (shift).
The standard floating point storage format is
    | s | e | f |
Exponent Bias
The bias value is chosen to give equal ranges for positive and negative
exponents without needing a sign bit. E.g., for an exponent with
• 8 bits: 0 ≤ e ≤ 255 = 2^8 − 1. Using p = 2^8/2 − 1 = 127 gives an exponent
  range of −127 ≤ e − p ≤ 128.
• 11 bits: 0 ≤ e ≤ 2047 = 2^11 − 1. Using p = 2^11/2 − 1 = 1023 gives
  −1023 ≤ e − p ≤ 1024.
Samples
Examples
1. −3.95 = (−1)¹ × 0.1234375 × 2^(21−16)
IEEE Standard for Floating-Point Arithmetic
Definition (IEEE-754)
Normalized Floating Point Representation (Binary)
Single precision (32 bit): x ≐ (−1)^s × (1. + f[23]) × 2^(e[8] − 127)
    sign: bit 31; exponent: bits 30–23; fraction: bits 22–0
Double precision (64 bit): x ≐ (−1)^s × (1. + f[52]) × 2^(e[11] − 1023)
    sign: bit 63; exponent: bits 62–52; fraction: bits 51–0
[bit-pattern diagrams omitted]
When e = 0 (all exponent bits are zero):
    f = 0: signed zero, n = (−1)^s × 0.0
    f ≠ 0: subnormal number, n = (−1)^s × 2^(−126) × 0.f
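The single-precision field layout can be inspected directly; a sketch using Python's struct module (the helper name is ours):

```python
import struct

def fields_single(x: float):
    """Split an IEEE-754 single into (sign, biased exponent, fraction bits)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF      # 8 bits, bias 127
    fraction = bits & 0x7FFFFF          # 23 fraction bits after the hidden "1."
    return sign, exponent, fraction

print(fields_single(1.0))      # (0, 127, 0): 1.0 = (+1) × 1.0 × 2^(127−127)
s, e, f = fields_single(-3.95)
print(s, e - 127)              # 1 1: −3.95 = −1.975 × 2^1
```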
Big and Small & Gaps
IEEE-754 Largest and Smallest Representable Numbers
Precision   Digits   Max Exp    Smallest #         Largest #
Single      ≈ 9      ≈ 38.2     ≈ 1.18 · 10^−38    ≈ 3.4 · 10^38
Double      ≈ 17     ≈ 307.95   ≈ 2.225 · 10^−307  ≈ 1.798 · 10^308

[number line: overflow | usable negative range | underflow | 0 | underflow | usable positive range | overflow]
Machine Epsilon
Definition
The machine epsilon e is the largest value such that
1+e = 1
for a given numeric implementation.
Machine Epsilon, II
Example (Double Precision [using Java ])
wmcb:I cat machineEpsilonD.java
class mEpsD {
public static void main(String[] args) {
double machEps = 1.0d;
do {
machEps /= 2.0d;
} while ((double)(1.0 + (machEps/2.0)) != 1.0);
System.out.println("Calculated machine epsilon: " + machEps);
}
}
wmcb:I javac machineEpsilonD.java
wmcb:I java mEpsD
Calculated machine epsilon: 2.220446049250313E-16 ⟹ εd ≈ 2.22 · 10^−16
Machine Epsilon, III
Example (Using Python 3)
>>> macEps = 1.0
>>> while (1.0 + macEps) != 1.0:
macEps = macEps/2.0
>>> macEps
1.1102230246251565e-16
>>>
>>> import numpy as np
>>> macEpsL = np.longdouble(1.0)
>>> while (1.0 + macEpsL) != 1.0:
macEpsL = macEpsL/2.0
>>> macEpsL
5.42101086242752217e-20
>>>
>>> np.finfo(np.longdouble)
finfo(resolution=1.0000000000000000715e-18,
min=-1.189731495357231765e+4932, max=1.189731495357231765e+4932,
dtype=float128)
Properties
Multiplication is commutative:
    n1 × n2 = n2 × n1
Maple’s Floating Point Representation
Floating Point Rounding
• Round to nearest, ties away from zero (used by Maple and Matlab)
Directed Roundings
• Round toward 0 — truncation
1 See EE Times’ “Rounding Algorithms”
Error
Defining Error
Absolute Error: The value errabs = |actual − approximate|
Relative Error: The ratio errrel = |actual − approximate| / |actual| = errabs / |actual|
A mature African bush elephant normally weighs about 6.5 tons. Suppose a
zoo-keeper makes an error of 50 lb (≈ the weight of a 7 yr old boy) weighing an
elephant. The relative error is
    errrel = 50 lb / 13000 lb ≐ 0.4%
Error Accumulates
Adding Error
Add 1 + 1/2 + 1/3 + 1/4 + ⋯ + 1/10^6 forwards and backwards with 6 digits.
Maple
Digits := 6:
N := 10^6:
Sf := 0;  Sb := 0;
for i from 1 to N do
    Sf := Sf + (1.0/i);
end do:
for j from N to 1 by -1 do
    Sb := Sb + (1.0/j);
end do:
Sf;  Sb;
    10.7624    14.0537
The correct value of Σ_{k=1}^{10^6} 1/k to 6 significant digits is 14.3927.
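A rough Python emulation of the same experiment. The 6-digit rounding helper here is ours and runs on binary doubles underneath, so the exact digits differ from Maple's decimal arithmetic, but the forward/backward gap persists:

```python
from math import floor, log10

def round6(x: float) -> float:
    """Round to 6 significant digits -- a rough stand-in for Maple's
    Digits := 6 (not bit-identical to Maple's decimal arithmetic)."""
    if x == 0.0:
        return 0.0
    return round(x, 5 - floor(log10(abs(x))))

N = 10**6
Sf = 0.0
for i in range(1, N + 1):          # forwards: largest terms first
    Sf = round6(Sf + round6(1.0 / i))
Sb = 0.0
for j in range(N, 0, -1):          # backwards: smallest terms first
    Sb = round6(Sb + round6(1.0 / j))

print(Sf, Sb)   # Maple reports 10.7624 and 14.0537; the true sum is 14.3927
```

Summing smallest-to-largest keeps the small terms from being swamped before they can accumulate, which is why the backward sum lands much closer to 14.3927.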
Error Accumulates
Subtracting Error
Solve for x: 1.22x² + 3.34x + 2.28 = 0 (3 digit, 2 decimal precision)
The quadratic formula r± = (−b ± √(b² − 4ac)) / (2a) can lead to problems.
Using the formula directly:
    b² = 11.2
    4ac = 11.1
    √(b² − 4ac) = √0.1 ≐ 0.32
    r+, r− = −1.24, −1.50
But the exact roots are:
    R± = (−167 ± √73)/122 ≐ −1.30, −1.44
The relative error is ≈ 5%.
“Rationalize the numerator” to eliminate a bad subtraction:
    R− = (−b − √(b² − 4ac)) / (2a) = 2c / (−b + √(b² − 4ac))
More Error Accumulates
Even Worse Subtraction Error
Solve for x: 0.01x² − 1.00x + 0.02 = 0 (3 digit, 2 decimal precision)
Again using the quadratic formula directly:
    4ac = 0.0008 ≐ 0.00
    √(b² − 4ac) ≐ 1.00
    r± ≐ 100., 0.00
But the real roots are:
    R± ≐ 99.98, 0.02
The relative errors are errrel ≈ 0.02% & 100%!
Again, “rationalize the numerator” to eliminate a bad subtraction:
    R− = (−b − √(b² − 4ac)) / (2a) ≐ 0.00, but 2c / (−b + √(b² − 4ac)) ≐ 0.02
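The “rationalize the numerator” fix translates directly into code; a sketch (function names ours) that computes the cancellation-free root first and recovers the other from the product r1·r2 = c/a. In full double precision the damage for these coefficients is mild — the slide's 3-digit arithmetic is what makes the naive version fail badly — but the technique is the same:

```python
from math import sqrt

def roots_naive(a, b, c):
    d = sqrt(b*b - 4*a*c)
    return (-b + d) / (2*a), (-b - d) / (2*a)

def roots_stable(a, b, c):
    """Compute the root that avoids subtracting nearly equal numbers,
    then recover the other one from r1 * r2 = c/a."""
    d = sqrt(b*b - 4*a*c)
    r1 = (-b - d) / (2*a) if b >= 0 else (-b + d) / (2*a)
    r2 = c / (a * r1)
    return r1, r2

print(roots_naive(0.01, -1.00, 0.02))
print(roots_stable(0.01, -1.00, 0.02))   # ~ (99.98, 0.02)
```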
‘Accuracy v Precision’
‘On Target’ v ‘Grouping’
Accuracy: How closely computed values agree with the true value.
Examples
• Roundoff: 0.6₁₀ + 0.5₁₀ = 1.1₁₀ ⟺ 1. + 1. ≟ 1.
  0.4₁₀ + 0.5₁₀ + 0.6₁₀ + 0.7₁₀ + 0.8₁₀ = 3 ⟺ 0. + 1. + 1. + 1. + 1. ≟ 3.
• Truncation: sin(θ) = Σ_{k=1}^∞ (−1)^(k−1) θ^(2k−1)/(2k−1)! ⟺ sin(θ) ≈ θ − (1/6)θ³
• tan(θ) = Σ_{n=1}^∞ (−1)^n B₂ₙ 4^n (1 − 4^n)/(2n)! · θ^(2n−1) ⟺ tan(θ) ≈ θ + (1/3)θ³
Landau Notation
“Big-O”
We use Landau’s notation to describe the order of terms or functions:
Big-O: If there is a constant C > 0 such that |f(x)| ≤ C · |g(x)| for
all x near x₀, then we say f = O(g) [that’s “f is ‘big-O’ of g”].²
Examples
1. For x near 0, we have sin(x) = O(x) and sin(x) = x + O(x³).
2. If p(x) = 101x⁷ − 123x⁶ + x⁵ − 15x² + 201x − 10, then
   • p(x) = O(x⁷) as x → ∞.
   • p(x) = O(1) for x near 0.
² Link to further big-O information.
Exercises, I
Problems
Scientific Notation
1. Convert several constants at NIST to engineering notation.
Converting Bases
2. Convert to decimal: 101110₂, 101₂ × 2^10, 101.0111₂, 1110.0 01₂.
3. Convert to binary (to 8 places): 10⁵, 1/7, 1234.4321, π, √2.
4. Express 831.22 in BCD form.
5. Write the BCD number 1001 0110 0011.1000 0101 in decimal.
6. Investigate converting bases by using synthetic division.
Floating Point Numbers
7. Convert 31.3875₁₀ to floating point format with base b = 10 and bias p = 49.
8. Convert from floating point format with base b = 2 and bias p = 127: 1 126₁₀ 5141₁₀.
9. Why is the gap between successive values larger for bigger numbers when using a fixed number of digits?
10. Give an example showing that floating point arithmetic is not distributive (mult over add).
Exercises, II
Problems
IEEE-754 Standard
11. Write 20/7 in single precision format. In double precision.
12. Convert the single precision # 0 10000111 0010010...0 to decimal.
13. Chart double precision bit patterns.
14. Describe a simple way to test if a computation result is either infinite or NaN.
15. What is the purpose of using round to nearest, ties to even?
16. Explain the significance of the machine-epsilon value.
Error
17. The US Mint specifies that quarters weigh 5.670 g. What is the largest acceptable weight, if the relative error must be no more than 0.5%?
18. Find the relative error when adding 1 + 1/2 + ⋯ + 1/10^5 using 5 digit arithmetic.
19. Show that cos(x) = O(1) for x near 0.
20. Let p be a polynomial with n = degree(p). Find k so that
    a. p(x) = O(x^k) as x → ∞.
    b. p(x) = O(x^k) for x ≈ 0.
II. Control Structures
Sections
1. Control Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2. A Common Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3. Control Structures Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1. Excel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2. Maple [ Sage / Xcas ] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3. MATLAB [ FreeMat / Octave / Scilab ] . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4. C and Java . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5. TI-84 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6. R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
7. Python . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
³ Is there a difference between not true and false? See, e.g., Intuitionistic Logic at
the Stanford Encyclopedia of Philosophy.
Types of Conditional Statements
Basic Conditional Types
Simple: IF condition THEN do action for true
If-Else: IF condition THEN do action for true ELSE do action for false
4 In 2014, NC converted to a regressive “flat tax” currently at 5.25% (2019).
Types of Loops
Loop Types
Counting Loops: For loops perform an action a pre-specified number of times.
Examples
• For each employee, calculate their monthly pay.
• For each integer i less than n, compute the ith number in the Fibonacci sequence.
Conditional Loops: While loops repeat an action as long as a condition holds.
Examples
• While the current remainder in the Euclidean algorithm is greater than 1, calculate the next remainder.
• While the game isn’t over, process the user’s input.
Example: Collatz Flow Chart
‘Collatz’ Function
[flow chart: start with an integer n; a loop with a conditional inside updates Collatz(n) and increments a counter j := j + 1]
Example: Collatz — A Loop and a Conditional
Pseudo-Code
We see a conditional loop in the Collatz function’s flow chart; there is an ‘if’
statement inside the loop to calculate the new term.
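A minimal Python version of that loop-plus-conditional structure (the function name is ours):

```python
def collatz_steps(n: int) -> int:
    """Count Collatz iterations until n reaches 1."""
    j = 0
    while n != 1:            # the conditional loop
        if n % 2 == 0:       # the 'if' inside the loop computes the new term
            n = n // 2
        else:
            n = 3 * n + 1
        j = j + 1
    return j

print(collatz_steps(7))   # 16 steps: 7 → 22 → 11 → ... → 2 → 1
```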
Control Structures Example: Flowchart
[flow chart: loop through the integers j = 1 … 5 with the update j = j + 1 and an if-then conditional to test for even or odd; while “Is j ≤ 5?” is True, repeat; when False, stop]
Control Structures Examples: Diagram
[flow chart: START: j = 1 → k = j²; if k mod 2 = 0, print “k is even”, else print “k is odd”; UPDATE: j = j + 1; if “Is j ≤ 5?” is True, loop again, else STOP]
Excel Control Structures
Conditional Statements
If: = IF(condition, true action, false action)
[Can nest up to 7 IFs: =IF(condition1 , IF(condition2 , . . . , . . . ), . . . )]
[But I’ve nested 12 deep without problems...]
Note
Many versions of Excel include Visual Basic for Applications (VBA), a small
programming language for macros. VBA includes a standard
if-then-else/elseif-end-if structure. (See the Excel Easy web tutorial.)
Excel Control Structures
Loop Structures
For: N/A (must be programmed in VBA)
View:
• Excel (Mac) website • Excel (Win) website
Excel Control Structures Example
Maple Control Structures
Conditional Statements
If: if condition then statements;
else statements;
end if
Maple Control Structures
Loop Structures
For: for index from start_value (default 1) to end_value
         by increment (default 1) do
         statements;
     end do
for index in expression sequence do
statements;
end do
While: while condition do
statements;
end do
View:
• Maple website • Maple Online Help
(See also: the Sage, Xcas, and TI Nspire, Maxima, or Mathematica’s website.)
Maple Control Structures Example
MATLAB1 Control Structures
Conditional Statements
If: if condition; statements; else statements; end
if condition; statements; elseif condition; statements;
else statements; end
1 FreeMat, Octave, and Scilab are FOSS clones of MATLAB. Also see GDL and R.
MATLAB / Octave / Scilab Control Structures
Loop Structures
For: for index = startvalue:increment:endvalue
statements
end
View:
• MATLAB website • Scilab website
• Octave website • FreeMat website
MATLAB Control Structures Example
octave-3.4.0:1>
> for j = 1:1:5;
> k = j*j;
> if mod(k,2)== 0;
> printf("%d is even\n", k);
> else
> printf("%d is odd\n", k);
> end; % of if
> end; % of for
1 is odd
4 is even
9 is odd
16 is even
25 is odd
octave-3.4.0:2>
C / Java Control Structures
Conditional Statements
If: if (condition) {statements}
if (condition) {statements}
else {statements}
Loop Structures
For: for (initialize; test; update ) {statements}
View:
• C reference card • Java reference card
C Control Structures Example
return 0;
}
Java Control Structures Example
}
}
TI-84 Control Structures
Conditional Statements
If: If condition: statement
If condition
Then
statements
Else
statements
End
TI-84 Control Structures
Loop Structures
For: For(index, start value, end value [, increment])
statements
End
View:
• TI Calculator website • TI-84 Guidebook links
TI-84 Control Structures Example
R Control Structures
Conditional Statements
If: if(condition) {statements}
if(condition)
{statements}
else
{statements}
R Control Structures
Loop Structures
For: for (variable in sequence)
{statements}
View:
• The R Project for Statistical • The Comprehensive R Archive
Computing homepage Network — CRAN
R Control Structures Example
Python Control Structures
Conditional Statements
if   if condition: statements

     if condition:
         statements
     else:
         statements
Python Control Structures Example
1 is odd
4 is even
9 is odd
16 is even
25 is odd
>>>
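The source code for this session was an image in the original slides; a minimal script consistent with the output shown:

```python
for j in range(1, 6):
    k = j * j
    if k % 2 == 0:       # if-then conditional inside the for loop
        print(k, "is even")
    else:
        print(k, "is odd")
```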
From Code to a Flow Chart
Maple Loops
Build flow charts for the Maple code shown below:
H Algorithm 1.
n := 12;
r := 1;
for i from 2 to n do
    r := r · i;
end do:
r;

H Algorithm 2.
n := 12;
R := n;
j := n − 1;
while j > 1 do
    R := R · j;
    j := j − 1;
end do:
R;
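Python equivalents of the two Maple loops — both appear to compute 12! — one with a counting loop and one with a conditional loop:

```python
# Algorithm 1: a counting (for) loop
n = 12
r = 1
for i in range(2, n + 1):
    r = r * i

# Algorithm 2: a conditional (while) loop
R = n
j = n - 1
while j > 1:
    R = R * j
    j = j - 1

print(r, R)   # both compute 12! = 479001600
```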
Exercises, I
Problems
1. Write an If-Then-ElseIf statement that calculates tax for a married
couple filing jointly using the 2011 NC Tax Table (before the “Flat Tax”).
a. In natural language
b. In Excel
c. In Maple
d. In Matlab (Octave or Scilab)
e. In C (Java, python, or R)
f. On a TI-84
3. Write code that, given a positive integer n, prints the first n primes.
4. Give a Maple version of Euler’s method.
5. Write nested for loops that fill in the entries of an n × n Hilbert matrix
a. In Maple b. On a TI-84
Problems
8. Make a flow chart for implementing the Euclidean algorithm to find
the GCD of two positive integers p and q.
9. Write code using the Euclidean algorithm to find the GCD of two
positive integers p and q.
10. Write a Maple or Matlab function that applies the Extended
Euclidean algorithm to two positive integers p and q to give the
greatest common divisor gcd(p, q) and to find integers a and b such
that a p + b q = gcd(p, q).
11. a. Make a flow chart for the Maple code shown in Flow Chart Problem
worksheet.
b. What does the code do?
c. Convert the Maple statements to
i. Matlab
ii. TI-84+
Exercises, III
Problems
12. The “9’s-complement” of a number x is the value needed to add to x to
get all 9’s. E.g., the 9’s-complement of 3 is 6; of 64 is 35; etc.
a. Write a statement to calculate the 9’s-complement of an n digit number y; call the
result y9 .
b. Write an if-then statement that performs carry-around: if the sum of two n-digit
numbers has an (n + 1)st carry digit, drop that digit and add 1 to the sum.
c. Let r > s be two n digit integers. Find s9 with a. Now perform carry-around on (r + s9 )
with b.
d. What simple arithmetic operation is equivalent to the result of c?
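A sketch of parts a and b in Python (function names ours), run on the slide's example values:

```python
def nines_complement(y: int, n: int) -> int:
    """Part a: the n-digit 9's-complement is (10^n - 1) - y."""
    return (10**n - 1) - y

def carry_around_add(a: int, b: int, n: int) -> int:
    """Part b: if an (n+1)st carry digit appears, drop it and add 1."""
    s = a + b
    if s >= 10**n:
        s = s - 10**n + 1
    return s

r, s, n = 64, 29, 2
s9 = nines_complement(s, n)              # 70
print(carry_around_add(r, s9, n))        # 35 -- compare with part d
```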
Quick Reference Cards
• MATLAB  • Scilab  • Octave
• TI-84+: Algebra; Trigonometry
• Wikiversity’s Control Structures
• C  • Java  • R
• Python 2.7, Python 3.2
S I. Special Topics: Computation Cost and Horner’s Form
Sections
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2. Horner’s Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Special Topic: Computation Cost & Horner’s Form
Introduction to Cost
The arithmetic cost of computation is a measure of how much
‘mathematical work’ a particular expression takes to compute. We will
measure an expression in terms of the number of arithmetic operations it
requires. For example, we’ll measure the cost of computing the
expression
sin(2x4 + 3x + 1)
as
2 additions + 5 multiplications + 1 function call
for a total of 7 arithmetic operations plus a function call.
At a lower level, the time cost of a cpu instruction is the number of clock
cycles taken to execute the instruction. Current CPUs⁵ are measured in
FLoating-point OPerations per Second or FLOPS. For example, the
eight-core Intel® Core™ i9 processor used in an iMac (early 2019) can
achieve over 235 gigaFLOPS ≈ 2.35 × 10^11 floating-point operations per second.
5 Current in early 2020, that is. See Moore’s Law.
Horner’s Form
Partial Factoring
William Horner studied solving algebraic equations and efficient forms for
computation. Horner observed that partial factoring simplified a polynomial
calculation. Consider:
Standard Form ⟺ Horner’s Form
1 + 2x  (1 add + 1 mult)  =  1 + 2x  (1 add + 1 mult)
1 + 2x + 3x²  (2 add + 3 mult)  =  1 + x·(2 + 3x)  (2 add + 2 mult)
1 + 2x + 3x² + 4x³  (3 add + 6 mult)  =  1 + x·(2 + x·[3 + 4x])  (3 add + 3 mult)
1 + 2x + 3x² + 4x³ + 5x⁴  (4 add + 10 mult)  =  1 + x·(2 + x·[3 + x·(4 + 5x)])  (4 add + 4 mult)
1 + 2x + 3x² + 4x³ + 5x⁴ + 6x⁵  (5 add + 15 mult)  =  1 + x·(2 + x·[3 + x·(4 + x·[5 + 6x])])  (5 add + 5 mult)
Two Patterns
If p(x) is an nth degree polynomial, the cost of computation in standard
form is O(n²). Using Horner’s form reduces the cost to O(n).
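Horner's form is a one-line loop in code; a sketch (function name ours):

```python
def horner(coeffs, x):
    """Evaluate c0 + c1 x + ... + cn x^n using n mults and n adds;
    coeffs are listed from the constant term up."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c     # one add and one mult per coefficient
    return acc

print(horner([1, 2, 3, 4], 2.0))   # 1 + 2·2 + 3·4 + 4·8 = 49.0
```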
Further Reductions
Chebyshev’s Polynomials
Pafnuty L. Chebyshev worked in number theory, approximation theory,
and statistics. The special polynomials named for him are the Chebyshev
Polynomials Tₙ(x) that have many interesting properties. For example, Tₙ
is even or odd with n, oscillates between −1 and 1 on the interval [−1, 1],
and also has all its zeros in [−1, 1]. The Horner form of Tₙ is quite
interesting. Let u = x², then:
    −3x + 4x³ ⟺ x(−3 + 4x²) = x(−3 + 4u)
Exercises, I
Problems
1. Make a flow chart for evaluating a polynomial using Horner’s form.
2. Write Maple or Matlab code implementing Horner’s form.
3. How does synthetic division relate to Horner’s form?
4. Write a Maple or Matlab function that performs synthetic division
with a given polynomial at a given value.
5. Calculate the number of additions and multiplications required for
evaluating an nth degree polynomial
a. in standard form. b. in Horner’s form.
c. Look up the sequence {0, 2, 5, 9, 14, 20, 27, 35, 44, . . . } at The On-Line
Encyclopedia of Integer Sequences.
Sections
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
2. Taylor’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3. Difference Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
i. Forward Differences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
ii. Backward Differences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
iii. Centered Differences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
III. Numerical Differentiation
Taylor series expansions will be our basic tool for developing formulas and
error bounds for numerical derivatives. The errors will have two main
components: truncation errors from Taylor polynomials and round-off
errors from finite-precision floating-point arithmetic.
Taylor’s Theorem
⁶ Actually, discovered by Gregory in 1671, ≈ 14 years before Taylor was born!
Proving Taylor’s Theorem
Tailored Expressions
Forms of the Remainder
Lagrange (1797): Rₙ(x) = f⁽ⁿ⁺¹⁾(c)/(n+1)! · (x − a)^(n+1)
Cauchy (1821): Rₙ(x) = f⁽ⁿ⁺¹⁾(c)/n! · (x − c)ⁿ (x − a)
Integral Form: Rₙ(x) = (1/n!) ∫ₐˣ f⁽ⁿ⁺¹⁾(t) (x − t)ⁿ dt
Uniform Form: |Rₙ(x)| ≤ |x − a|^(n+1)/(n+1)! · max|f⁽ⁿ⁺¹⁾(x)| = O(|x − a|^(n+1))
(f(a + h) − f(a))/h = f′(a) + O(h²)/h
The Forward Difference Formula is
    f′(a) = (f(a + h) − f(a))/h + O(h)    (FD)
Examples
1. Suppose f(x) = 1 + x e^sin(x). For a = 0 and h = 0.1, we have
    f′(0) ≈ (1.1105 − 1.0000)/0.1 = 1.105
(f(a) − f(a − h))/h = f′(a) + O(h²)/h
The Backward Difference Formula is
    f′(a) = (f(a) − f(a − h))/h + O(h)    (BD)
Examples
1. Again, suppose f(x) = 1 + x e^sin(x). For a = 0 and h = 0.1, we have
    f′(0) ≈ (1.0000 − 0.910)/0.1 = 0.900
The Centered Difference Formula is
    f′(a) = (f(a + h) − f(a − h))/(2h) + O(h²)    (CD)
Example
1. Once more, suppose f(x) = 1 + x e^sin(x). For a = 0 and h = 0.1, we have
    f′(0) ≈ (1.110 − 0.910)/0.2 = 1.000
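The three formulas side by side for the running example f(x) = 1 + x e^sin(x):

```python
from math import exp, sin

f = lambda x: 1 + x * exp(sin(x))
a, h = 0.0, 0.1

forward  = (f(a + h) - f(a)) / h            # (FD), error O(h)
backward = (f(a) - f(a - h)) / h            # (BD), error O(h)
centered = (f(a + h) - f(a - h)) / (2 * h)  # (CD), error O(h^2)

print(forward, backward, centered)          # exact value: f'(0) = 1
```

The centered estimate is roughly 100× closer to f′(0) = 1 than the one-sided ones, as the O(h²) versus O(h) error terms predict.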
The Chart
A Table of Differences From a Function
Let f (x) = 1 + x esin(x) and a = 1.0. Then
    f′(1) = e^sin(1) (1 + cos(1)) ≈ 3.573157593
Another Chart
A Table of Differences From Data
Estimate the derivatives of a function given the data below (h = 0.4).
xi   −2.00  −1.60  −1.20  −0.80  −0.40   0.00   0.40   0.80   1.20   1.60   2.00
yi   −1.95  −0.29   0.56   0.81   0.65   0.30  −0.06  −0.21   0.04   0.89   2.55
Forward Differences
y′i   4.16   2.12   0.625  −0.375 −0.90  −0.90  −0.375  0.625  2.12   4.16   ./
Backward Differences
y′i   ./     4.16   2.12   0.625  −0.375 −0.90  −0.90  −0.375  0.625  2.12   4.16
Centered Differences
y′i   ./     3.138  1.374  0.125  −0.637 −0.90  −0.6375 0.125  1.374  3.138  ./
Appendix I: Taylor’s Theorem
Methodus Incrementorum Directa et Inversa (1715)1
In 1712, Taylor wrote a letter containing his theorem without proof to
Machin. The theorem appears with proof in Methodus Incrementorum as
Corollary II to Proposition VII. The proposition is a restatement of
“Newton’s [interpolation] Formula.” Maclaurin introduced the method
(undetermined coefficients; order of contact) we use now to present Taylor’s theorem in
elementary calculus classes in A Treatise of Fluxions (1742) §751.
E.g., the third derivative’s centered difference approximation with second-order accuracy is
    f⁽³⁾(x₀) = [−½ f(x₀ − 2h) + f(x₀ − h) + 0·f(x₀) − f(x₀ + h) + ½ f(x₀ + 2h)] / h³ + O(h²).
Problems
1. Show that the centered difference formula is the average of the forward
and backward difference formulas.
2. Explain why the centered difference formula is O(h²) rather than O(h).
3. Add O(h⁴) versions of Eqs (1) and (2) to find a centered difference
approximation to f″(a).
4. Investigate the ratio of error in the function’s difference chart as h is
successively divided by 2 for
a. forward differences
b. backward differences
c. centered differences
5. Examine the ratios of error to h in the data difference chart for
a. forward differences
b. backward differences
c. centered differences
Exercises, II
Problems
6. Derive the 5-point difference formula for f′(a) by combining Taylor
expansions to O(h⁵) for f(a ± h) and f(a ± 2h).
7. Write a Maple or Matlab function that uses the backward difference
formula (BD) in Euler’s method of solving differential equations.
8. Collect the temperatures (with a CBL) in a classroom from 8:00 am
to 6:00 pm.
a. Estimate the rate of change of temperatures during the day.
b. Compare plots of the rates given by forward, backward, and centered
differences.
9. a. Find Taylor expansions for sin and cos to O(x⁶). Estimate cos(1.0).
b. Since (d/dx) sin(x) = cos(x), we can estimate cos with the derivative of
sin. Use your expansion of sin and h = 0.05 to approximate cos(1.0)
with
i. forward differences   ii. backward differences   iii. centered differences
Discuss the errors.
IV. Root Finding Algorithms
Sections
1. The Bisection Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85
2. Newton-Raphson Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3. Secant Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4. Regula Falsi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
IV. Root Finding Algorithms: The Bisection Method
The Bisection Error
Theorem (Bisection Algorithm)
Let [a, b] be an interval on which f changes sign. Define
    xₙ = cₙ = ½(aₙ₋₁ + bₙ₋₁)
with [aₙ, bₙ] chosen by the algorithm. Then f has a root α ∈ [a, b], and
    |α − xₙ| ≤ (b − a) · 2⁻ⁿ
1 A-L Cauchy, Exercices de mathématiques, De Bure frères, Paris (1829).
The Bisection Method Algorithm
2. Calculate c = ½(aₖ + bₖ)
The Bisection Method Pseudocode
input : a, b, eps
extern: f(x)
1: fa ← f(a);
2: fb ← f(b);
3: if fa · fb > 0 then
4:     stop ;  /* Better: sign(fa) ≠ sign(fb) */
5: n ← ceiling((log(b − a) − log(eps)) / log(2));
6: for i ← 1 to n do
7:     c ← a + 0.5 · (b − a);
8:     fc ← f(c);
9:     if abs(fc) < eps then
       return: c
10: if fa · fc < 0 then
11:     b ← c;
12:     fb ← fc;
13: else
14:     if fa · fc > 0 then
15:         a ← c;
16:         fa ← fc;
17:     else
        return: c
return: c
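The pseudocode above, transcribed to Python (the default tolerance is ours):

```python
from math import ceil, log

def bisection(f, a, b, eps=1e-6):
    fa, fb = f(a), f(b)
    if fa * fb > 0:                      # better: compare signs
        raise ValueError("f must change sign on [a, b]")
    # enough halvings to shrink (b - a) below eps
    n = ceil((log(b - a) - log(eps)) / log(2))
    c = a
    for _ in range(n):
        c = a + 0.5 * (b - a)
        fc = f(c)
        if abs(fc) < eps:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

print(bisection(lambda x: x*x - 2, 0.0, 2.0))   # ≈ 1.414214
```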
Newton-Raphson Method
Newton-Raphson Method8
If a function f is ‘nice’, use the tangent line to f at x = a: its x-intercept
— easy to find — approximates the root of f. Iterating gives
    xₙ₊₁ = N(xₙ), with N(x) = x − f(x)/f′(x)
[graph: tangent-line steps from x₀ = 2.6 to x₁ = 3.56]
⁸ The general method we use was actually developed by Simpson; Newton worked
with polynomials; Raphson iterated the formula to improve the estimate of the root.
Newton’s Method Error
Theorem
Let f ∈ C²(I) on some interval I ⊂ R. Suppose α ∈ I is a root of f.
Choose x₀ ∈ I and define
    xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ).
Then
    |α − xₙ₊₁| ≤ M · |α − xₙ|²
or, with eₙ = |α − xₙ|,
    eₙ₊₁ ≤ M · eₙ²
where M is an upper bound for ½|f″(x)/f′(x)| on I.
Newton’s Method Algorithm
2. Compute xₖ = N(xₖ₋₁).
Newton’s Method Pseudocode
1: fx0 ← f(x0);
2: dfx0 ← df(x0);
3: for i ← 1 to n do
4:     x1 ← x0 − fx0/dfx0;
5:     fx1 ← f(x1);
6:     if |fx1| + |x1 − x0| < eps then
7:         return: x1
8:     x0 ← x1;
9:     fx0 ← fx1;
10:    dfx0 ← df(x0);
return: x1
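A Python transcription of the pseudocode above (parameter defaults ours):

```python
def newton(f, df, x0, eps=1e-12, n=50):
    fx0, dfx0 = f(x0), df(x0)
    x1 = x0
    for _ in range(n):
        x1 = x0 - fx0 / dfx0
        fx1 = f(x1)
        if abs(fx1) + abs(x1 - x0) < eps:   # stop on small residual and step
            return x1
        x0, fx0, dfx0 = x1, fx1, df(x1)
    return x1

print(newton(lambda x: x*x - 2, lambda x: 2*x, 1.0))   # ≈ 1.41421356237
```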
Secant Method
Secant Method
Newton’s method requires evaluating the derivative — this can be from
difficult to impossible in practice. Approximate the derivative in Newton’s
method with a secant line⁹:
    xₙ₊₁ = xₙ − f(xₙ) / [(f(xₙ) − f(xₙ₋₁)) / (xₙ − xₙ₋₁)]
         = xₙ − f(xₙ) · (xₙ − xₙ₋₁) / (f(xₙ) − f(xₙ₋₁))
[graphs: two secant steps — the line through (x₀, f(x₀)) and (x₁, f(x₁)) gives x₂, and so on]
⁹ Historically, the methods developed the opposite way: Viète used discrete steps of 10⁻ᵏ
(1600); Newton used secants (1669), then ‘truncated power series’ (1687); Simpson used
fluxions/derivatives (1740) with ‘general functions.’
Secant Method Error
Theorem
Let f ∈ C²(I) for some interval I ⊂ R. Suppose α ∈ I is a root of f.
Choose x₀ and x₁ ∈ I, and define
    xₙ₊₁ = xₙ − f(xₙ) · (xₙ − xₙ₋₁)/(f(xₙ) − f(xₙ₋₁)).
Then
    |α − xₙ₊₁| ≤ M · |α − xₙ| · |α − xₙ₋₁|
or, with eₙ = |α − xₙ|,
    eₙ₊₁ ≤ M · eₙ · eₙ₋₁
where M is an upper bound for ½|f″(x)/f′(x)| on I.
This is superlinear convergence of “order 1.6”. (Actually, it’s order (1 + √5)/2.)
Secant Method Algorithm
2. Compute xₖ = S(xₖ₋₁, xₖ₋₂).
Secant Method Pseudocode
return: x1
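A Python transcription of the secant iteration above (parameter defaults ours):

```python
def secant(f, x0, x1, eps=1e-12, n=50):
    fx0, fx1 = f(x0), f(x1)
    for _ in range(n):
        # slope of the secant replaces the derivative of Newton's method
        x2 = x1 - fx1 * (x1 - x0) / (fx1 - fx0)
        x0, fx0 = x1, fx1
        x1, fx1 = x2, f(x2)
        if abs(fx1) + abs(x1 - x0) < eps:
            return x1
    return x1

print(secant(lambda x: x*x - 2, 1.0, 2.0))   # ≈ 1.41421356237
```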
Regula Falsi
Regula Falsi
The regula falsi, or ‘false position,’ method10 is very old; the Egyptians
used the concept. The method appears in the Vaishali Ganit (India, 3rd
century BC), Book on Numbers and Computation & Nine Chapters on
the Mathematical Art (China, 2nd century BC), Book of the Two Errors
(Persia, c 900), and came to the west in Fibonacci’s Liber Abaci (1202).
Regula falsi combines the secant and bisection techniques: Use the
secant to find a “middle point,” then keep the interval with a sign
change, i.e., that brackets the root.
[graphs: the secant through (a, f(a)) and (b, f(b)) gives c; keep the subinterval where f changes sign]
¹⁰ Also called Modified Regula Falsi, Double False Position, Regula Positionum, Secant Method,
Rule of Two Errors, etc. My favourite name is Yı́ng bù zú: ‘Too much and not enough.’
Regula Falsi Method Error
Theorem
Let f ∈ C²(I) for some interval I ⊂ R. Suppose α ∈ I is a root of f.
Choose a and b ∈ I such that sign(f(a)) ≠ sign(f(b)), and define
    c = b − f(b) · (b − a)/(f(b) − f(a)).
Then
    |α − c| ≤ M · |α − a|
or, with eₙ = |α − xₙ|,
    eₙ₊₁ ≤ M · eₙ
where 0 < M < 1 is a constant depending on |f″(x)| and |f′(x)| on I.
Regula Falsi Algorithm
2. Compute c = S(a, b)
3. If f(c) ≈ 0, then c is a root; quit
4. If f(c) · f(a) < 0, then b ← c
5. else a ← c
Regula Falsi Pseudocode
input : a, b, eps, n
extern: f(x)
1: fa ← f(a);
2: fb ← f(b);
3: if fa · fb > 0 then
4:   stop ; /* Better: sign(fa) ≠ sign(fb) */
5: for i ← 1 to n do
6:   c ← (a·fb − b·fa)/(fb − fa) ; /* Better: c ← b − fb·(b−a)/(fb−fa) */
7:   fc ← f(c);
8:   if |fc| < eps then
       return: c
9:   if fa · fc < 0 then
10:    b ← c;
11:    fb ← fc;
12:  else
13:    if fb · fc < 0 then
14:      a ← c;
15:      fa ← fc;
16:    else
         return: c
return: c
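The pseudocode above translates almost line for line; a minimal Python sketch using the ‘better’ form of line 6 (the test function x³ − 2x − 5 on [2, 3] is illustrative):

```python
def regula_falsi(f, a, b, eps=1e-10, n=200):
    """False position: keep the subinterval whose endpoints bracket the root."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("need sign(f(a)) != sign(f(b))")
    c = a
    for _ in range(n):
        c = b - fb * (b - a) / (fb - fa)   # the 'better' form of line 6
        fc = f(c)
        if abs(fc) < eps:
            break
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

root = regula_falsi(lambda x: x**3 - 2*x - 5, 2.0, 3.0)
```

Note the linear convergence: one endpoint typically stays fixed, so many more iterations are needed than with the secant method.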
Halley’s Method
Halley’s method11 of 1694 extends Newton’s method to obtain cubic
convergence. Halley was motivated by de Lagny’s 1692 work showing algebraic
formulas for extracting roots. Using the quadratic term in Taylor’s formula
obtains the extra degree of convergence. Just as Newton did not recognize the
derivative appearing in his iterative method, Halley also missed the connection
to calculus — it was first seen by Taylor in 1712. The version of Halley’s
method we use comes from applying Newton’s method to the auxiliary function
  g(x) = f(x) / √|f′(x)|.
Then the iterating function for Halley’s method is
  x_{n+1} = x_n − 2 f(x_n) f′(x_n) / (2 f′(x_n)² − f(x_n) f″(x_n)).
11 For historical background and a nice development, see T. Scavo and J. Thoo, “On the Geometry of Halley’s Method,” Am. Math. Mo., Vol. 102, No. 5, pp. 417–426.
Halley’s Method Error
Theorem
Let f ∈ C³(I) on some interval I ⊂ ℝ. Suppose a ∈ I is a root of f.
Choose x₀ ∈ I and define
  x_{n+1} = x_n − 2 f(x_n) f′(x_n) / (2 f′(x_n)² − f(x_n) f″(x_n)).
Then
  |a − x_{n+1}| ≤ M · |a − x_n|³
or, with e_n = |a − x_n|,
  e_{n+1} ≤ M · e_n³
where M is an upper bound for |(3 f″(x)² − 2 f′(x) f‴(x)) / (12 f′(x)²)| on I.
Halley’s Method Algorithm
1. Set k = 1 and H(x) = x − 2 f(x) f′(x) / (2 f′(x)² − f(x) f″(x)).
2. Compute x_k = H(x_{k−1}).
1: fx0 ← f(x0);
2: dfx0 ← df(x0);
3: ddfx0 ← ddf(x0);
4: for i ← 1 to n do
5:   x1 ← x0 − 2·fx0·dfx0 / (2·dfx0² − fx0·ddfx0);
6:   fx1 ← f(x1);
7:   if |fx1| + |x1 − x0| < eps then
8:     return: x1
9:   x0 ← x1;
10:  fx0 ← f(x1);
11:  dfx0 ← df(x1);
12:  ddfx0 ← ddf(x1);
return: x1
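The same loop in Python; df and ddf are the first and second derivatives supplied by the caller, and the test function x³ − 2x − 5 (with its real root ≈ 2.0945515) is illustrative:

```python
def halley(f, df, ddf, x0, eps=1e-12, n=50):
    """Halley iteration x1 = x0 - 2 f f' / (2 f'^2 - f f'')."""
    for _ in range(n):
        fx, dfx, ddfx = f(x0), df(x0), ddf(x0)
        x1 = x0 - 2*fx*dfx / (2*dfx**2 - fx*ddfx)
        if abs(f(x1)) + abs(x1 - x0) < eps:
            return x1
        x0 = x1
    return x0

root = halley(lambda x: x**3 - 2*x - 5,
              lambda x: 3*x**2 - 2,
              lambda x: 6*x,
              2.0)
```

Cubic convergence: from x0 = 2 the error falls below machine precision in about four iterations.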
Polynomial Root
Find the real root of f(x) = x¹¹ − x² − x − 0.5 in [1/2, 3/2]. (r = 1.098282972)
(This is f’s only real root.)
[Figure: graph of f near the root.]
Method           Error                      Convergence Speed           Computation Cost
Bisection        e_{n+1} ≤ ½ e_n            Linear (order 1)            Low
Regula Falsi     e_{n+1} ≤ C e_n            Linear (order 1)            Medium
Secant method    e_{n+1} ≤ C e_n e_{n−1}    Superlinear (order ≈ 1.6)   Medium
Newton’s method  e_{n+1} ≤ C e_n²           Quadratic (order 2)         High
Halley’s method  e_{n+1} ≤ C e_n³           Cubic (order 3)             Very High
Appendix III: Rate of Convergence
Terminology
A sequence x_n → a with e_n = |a − x_n| converges with rate r and constant C when e_{n+1} ≈ C · e_nʳ.
Rate         Parameters
Sublinear    r = 1 and C = 1
Linear       r = 1 and 0 < C < 1
Superlinear  r > 1
Quadratic    r = 2
Cubic        r = 3
NB: Quadratic and cubic are special cases of superlinear convergence.
Exercises, I
Exercises
For each of the functions given in 1. to 7. below:
a. Graph f in a relevant window.
b. Use Maple’s fsolve to find f’s root to 10 digits.
c. Use each of the five methods with a maximum of 15 steps filling in the table:
  Method | Approx Root | Relative Error | No. of Steps
1. T(x) = x³ − 2x − 5
2. f(x) = 13 − 7x − x¹¹
3. g(x) = ∫₀ˣ sin(t²/2) dt − 1
5. R(x) = (30x − 31) / (29(x − 1))
6. S(x) = (sin(x²) + 1) / (cos(x) + 2) for x ∈ [0, 4]
7. The intransigent function y(x) = 10 · e^{x·ln(x²)/(1+x)}
8. Explain why the bisection method has difficulties with two roots in an interval.
17. [Group Exercise] Redo the previous problem finding all roots in
[−1, 1] of the 8th Chebyshev polynomial
  T₈(x) = 128x⁸ − 256x⁶ + 160x⁴ − 32x² + 1
Sections
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
2. Modified Newton’s Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Choose an initial value x₀. Then calculate the values x_{n+1} = M(x_n) for
n = 1, 2, . . . . The question is: Does x_n have a limit?
Convergence?
Use Newton’s example function: y = x³ − 2x − 5. Then
  M(x) = x − (3x² − 2) / (6x)
Starting with x₀ = 1 gives the sequence x₁ = 0.83̄, x₂ = 0.816̄,
x₃ = 0.81649659, x₄ = 0.81649658. Where are these points going?
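The sequence above can be reproduced directly; a short Python sketch iterating M. (The limit √(2/3) ≈ 0.81649658 is the positive critical point of y = x³ − 2x − 5, since M is Newton’s iteration applied to y′.)

```python
import math

def M(x):
    # Newton's iteration applied to y' = 3x^2 - 2  (y = x^3 - 2x - 5)
    return x - (3*x**2 - 2) / (6*x)

x = 1.0
seq = []
for _ in range(6):
    x = M(x)
    seq.append(x)       # 0.8333..., 0.81666..., 0.81649659, ...
```

seq[0] = 5/6 = 0.83̄, matching x₁ on the slide.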
Exercises
Problems
1. Use Maple to generate a random polynomial with:
> randomize(your phone number, no dashes or spaces):
deg := 1+2*rand(4..9)():
p := randpoly(x, degree=deg, coeffs=rand(-2..2)):
p := unapply(sort(p), x);
2. Apply the Modified Newton’s Method to your polynomial with a
selection of starting points.
3. Produce a chart of your intermediate results.
4. Graph your polynomial and the trajectories using your data chart.
5. Can you determine a specific value where the trajectories change
from one to another target point?
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
[Figure: left-endpoint, midpoint, and right-endpoint rectangle approximations to a function.]
The Greeks studied quadrature: given a region, construct a square with the same area.
  Left:     A_n ≈ Σ_{k=1}^{n} f(x_{k−1}) Δx_k
  Midpoint: A_n ≈ Σ_{k=1}^{n} f(m_k) Δx_k,  m_k = ½(x_{k−1} + x_k)
  Right:    A_n ≈ Σ_{k=1}^{n} f(x_k) Δx_k
Trapezoid Sums
Instead of the degree 0 rectangle approximations to the
function, use a linear degree 1 approximation. The area
of the trapezoid is given by
  A_T = ½ [f(x_{k−1}) + f(x_k)] Δx_k
This gives an approximation for the integral
  ∫_a^b f(x) dx ≈ Σ_{k=1}^{n} ½ [f(x_{k−1}) + f(x_k)] Δx_k
[Midpoint: measure height at average x v. trapezoid: average the height measures]
The formula is often written (for an equipartition) as
  T_n ≈ [f(x₀) + 2 Σ_{k=1}^{n−1} f(x_k) + f(x_n)] · Δx/2
Example
Let f(x) = sin(x) + ½ sin(2x) − ¼ sin(4x) + (1/16) sin(8x) over [0, π].
With an equipartition, Δx = π/10 ≈ 0.314. Then
  T₁₀ = [f(0) + 2 Σ_{k=1}^{9} f(kπ/10) + f(π)] · Δx/2
which gives
  T₁₀ = 1.984
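T₁₀ can be checked in a few lines of Python (a sketch of the composite trapezoid rule on an equipartition; the helper names are assumptions):

```python
import math

def f(x):
    return (math.sin(x) + math.sin(2*x)/2
            - math.sin(4*x)/4 + math.sin(8*x)/16)

def trapezoid(g, a, b, n):
    """Composite trapezoid rule: (h/2)[g(x0) + 2*sum(g(xk)) + g(xn)]."""
    h = (b - a) / n
    return h * ((g(a) + g(b))/2 + sum(g(a + k*h) for k in range(1, n)))

T10 = trapezoid(f, 0.0, math.pi, 10)   # about 1.984; the exact integral is 2
```

Only the sin(x) term contributes: the other sines sum to zero at these nodes.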
Simpson’s Rule
We now move to a degree 2 approximation. The easiest way to have
three data points is to take the panels in pairs: instead of rectangle
base [x_i, x_{i+1}], use [x_i, x_{i+1}, x_{i+2}]. So we require an even number
of panels. The area under the parabola is
  A_S = ⅓ [f(x_i) + 4 f(x_{i+1}) + f(x_{i+2})] Δx
Example
Let f(x) = sin(x) + ½ sin(2x) − ¼ sin(4x) + (1/16) sin(8x) over [0, π].
With Δx = π/10 ≈ 0.3141592654, Simpson’s rule gives
  S₁₀ = 2.000006784
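The quoted value is reproduced by composite Simpson’s rule when the ten panels are taken in pairs, i.e. with 20 subintervals of width π/20 (an assumption, since the slide lists Δx = π/10 as the panel width); a sketch:

```python
import math

def f(x):
    return (math.sin(x) + math.sin(2*x)/2
            - math.sin(4*x)/4 + math.sin(8*x)/16)

def simpson(g, a, b, n):
    """Composite Simpson's rule; n (number of subintervals) must be even."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + k*h) for k in range(1, n, 2))
    s += 2 * sum(g(a + k*h) for k in range(2, n, 2))
    return s * h / 3

S10 = simpson(f, 0.0, math.pi, 20)
```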
Maple gives
             n = 50       n = 500     n = 5000
  left       0.0497133    0.541336    3.42282
  right      0.0497180    0.541336    3.42282
  midpoint   3.1210200    4.052010    2.88243
  trapezoid  0.0497157    0.541336    3.42282
  Simpson    2.0972500    2.881790    3.06256
  Left endpoint:  f(x_i),      error ≤ ((b − a)²/2) · M₁ · (1/n) = O(h)
  Right endpoint: f(x_{i+1}),  error ≤ ((b − a)²/2) · M₁ · (1/n) = O(h)
In Search of Improvements
Write the rules we’ve seen as sums:
  Left endpt:  L_n = (1/n) f(x₀) + (1/n) f(x₁) + ··· + (1/n) f(x_{n−1})
  Right endpt: R_n = (1/n) f(x₁) + (1/n) f(x₂) + ··· + (1/n) f(x_n)
  Midpoint:    M_n = (1/n) f(x_{m₁}) + (1/n) f(x_{m₂}) + ··· + (1/n) f(x_{m_n})
  Trapezoid:   T_n = (1/2n) f(x₀) + (1/n) f(x₁) + ··· + (1/n) f(x_{n−1}) + (1/2n) f(x_n)
  Simpson’s:   S_n = (1/3n) f(x₀) + (4/3n) f(x₁) + (2/3n) f(x₂) + ··· + (4/3n) f(x_{n−1}) + (1/3n) f(x_n)
15 “Methodus nova integralium valores per approximationem inveniendi,”
Comment. Soc. Regiae Sci. Gottingensis Recentiores, v. 3, 1816.
Patterns
Observations
• Each of the formulas has the same form: a weighted sum
  A_n = w₁ · f(x₁) + w₂ · f(x₂) + ··· + w_n · f(x_n)
with different sets of weights w_i and different sets of nodes x_i.
Since we have 2n ‘unknowns’ w_i and x_i, let’s look for a set that integrates
a degree 2n − 1 polynomial exactly. (Remember: a degree 2n − 1 polynomial has
2n coefficients.)
Sampling 3
Example (Third Degree)
Set n = 3. Determine the choice of w_i and of x_i so that
  ∫_{−1}^{1} x^p dx = Σ_{k=1}^{3} w_k · x_k^p
exactly for p = 0, 1, . . . , 5 = 2·3 − 1.
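The solution is the 3-point Gauss–Legendre rule used on the next slide, with nodes ±√(3/5), 0 and weights 5/9, 8/9, 5/9; a quick Python check of its exactness through degree 5:

```python
import math

nodes = [-math.sqrt(3/5), 0.0, math.sqrt(3/5)]
weights = [5/9, 8/9, 5/9]

def exact(p):
    # integral of x^p over [-1, 1]
    return (1 - (-1)**(p + 1)) / (p + 1)

errors = [abs(sum(w * x**p for w, x in zip(weights, nodes)) - exact(p))
          for p in range(6)]   # p = 0..5 = 2*3 - 1
```

Degree 6 fails, as it must: the rule has only 2n = 6 free parameters.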
Random Polynomials
Generate and test a random 5th degree polynomial.
p := unapply(sort(randpoly(x, degree = 5), x), x)
  x → 7x⁵ + 22x⁴ − 55x³ − 94x² + 87x − 56
G3 := 5/9*p(-sqrt(3/5)) + 8/9*p(0) + 5/9*p(sqrt(3/5))
  −2488/15
Int(p(x), x = -1..1) = int(p(x), x = -1..1)
  ∫_{−1}^{1} p(x) dx = −2488/15
GK7,15 (1989)
A widely used implementation is based on a Gaussian quadrature with 7 nodes.
Kronrod adds 8 to total 15 nodes.
GK7,15 on [−1, 1]
  G7 = Σ_{k=1}^{7} w_k f(x_k)
  GK7,15 = Σ_{j=1}^{15} w_j f(x_j)
  e7,15 ≈ |G7 − GK7,15|, or, in practice, use17 (200 |G7 − GK7,15|)^{3/2}

Gauss-7 nodes            Weights
 0.00000 00000 00000     0.41795 91836 73469
±0.40584 51513 77397     0.38183 00505 05119
±0.74153 11855 99394     0.27970 53914 89277
±0.94910 79123 42759     0.12948 49661 68870

Kronrod-15 nodes         Weights
 0.00000 00000 00000  G  0.20948 21410 84728
±0.20778 49550 07898  K  0.20443 29400 75298
±0.40584 51513 77397  G  0.19035 05780 64785
±0.58608 72354 67691  K  0.16900 47266 39267
±0.74153 11855 99394  G  0.14065 32597 15525
±0.86486 44233 59769  K  0.10479 00103 22250
±0.94910 79123 42759  G  0.06309 20926 29979
±0.99145 53711 20813  K  0.02293 53220 10529
Example
Find ∫_{−1}^{1} e^{−x²} dx.
Using Maple gives:
  G7 = Σ_{k=1}^{7} w_k f(x_k) = 1.49364828886941
  GK7,15 = Σ_{k=1}^{15} w_k f(x_k) = 1.49364826562485
  e7,15 ≈ |G7 − GK7,15| = 2.324456 · 10⁻⁸
See Maple’s Online Help for int/numeric to see the methods available.
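The quoted G7 value can be reproduced directly from the Gauss-7 nodes and weights in the table (a sketch; each nonzero node is applied as a ± pair):

```python
import math

# Gauss-7 node (x >= 0) and weight pairs from the table
g7 = [
    (0.000000000000000, 0.417959183673469),
    (0.405845151377397, 0.381830050505119),
    (0.741531185599394, 0.279705391489277),
    (0.949107912342759, 0.129484966168870),
]

def G7(f):
    total = 0.0
    for x, w in g7:
        total += w * f(x) if x == 0.0 else w * (f(x) + f(-x))
    return total

approx = G7(lambda x: math.exp(-x * x))   # integral of e^{-x^2} over [-1, 1]
```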
A Class Exercise
3. ∫₀¹ 4/(x⁴ + 16) dx
4. ∫₀^π cos(2ᵃ sin(x)) dx = π J₀(2ᵃ)
7. ∫_{−1}^{+1} 1/(√(1 − x²) (1 + x + 2⁻ᵃ)) dx = π/√((1 + 2⁻ᵃ)² − 1)
14. ∫_{−1}^{2} U(x) e^{x/2} dx (discontinuity)
16. ∫_{−1}^{4} 10⁻⁴/((x − √2)² + 10⁻⁸) dx (sharp peak)
Sections
1. Polynomial Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Definition
Knots or Nodes: The x-values of the interpolation points.
Lagrange Fundamental Polynomial: Given a set of n + 1 knots, define
  L_i(x) = ∏_{k=0..n, k≠i} (x − x_k)/(x_i − x_k)
         = (x − x₀)/(x_i − x₀) × ··· × (x − x_{i−1})/(x_i − x_{i−1}) × (x − x_{i+1})/(x_i − x_{i+1}) × ··· × (x − x_n)/(x_i − x_n)
For the knots {−2, −1, 0, 1, 2} (the struck-out factor k = i drops from each product):
  L₀(x) = (x+1)/(−2+1) · (x−0)/(−2−0) · (x−1)/(−2−1) · (x−2)/(−2−2)
  L₁(x) = (x+2)/(−1+2) · (x−0)/(−1−0) · (x−1)/(−1−1) · (x−2)/(−1−2)
  L₂(x) = (x+2)/(0+2) · (x+1)/(0+1) · (x−1)/(0−1) · (x−2)/(0−2)
  L₃(x) = (x+2)/(1+2) · (x+1)/(1+1) · (x−0)/(1−0) · (x−2)/(1−2)
  L₄(x) = (x+2)/(2+2) · (x+1)/(2+1) · (x−0)/(2−0) · (x−1)/(2−1)
Graph the L_k!
Sampling a Better Way
Example
Let S = {[−2, 1], [−1, −1], [0, 1], [1, −1], [2, 1]}. Then
  P(x) = (1) · (x+1)/(−2+1) · (x−0)/(−2−0) · (x−1)/(−2−1) · (x−2)/(−2−2)
       + (−1) · (x+2)/(−1+2) · (x−0)/(−1−0) · (x−1)/(−1−1) · (x−2)/(−1−2)
       + (1) · (x+2)/(0+2) · (x+1)/(0+1) · (x−1)/(0−1) · (x−2)/(0−2)
       + (−1) · (x+2)/(1+2) · (x+1)/(1+1) · (x−0)/(1−0) · (x−2)/(1−2)
       + (1) · (x+2)/(2+2) · (x+1)/(2+1) · (x−0)/(2−0) · (x−1)/(2−1)
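A small Python sketch that evaluates an interpolant straight from the definition, using the five knots and alternating data of this example:

```python
def lagrange(knots, values, x):
    """P(x) = sum_i y_i L_i(x), with L_i(x) = prod_{k != i} (x - x_k)/(x_i - x_k)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(knots, values)):
        Li = 1.0
        for k, xk in enumerate(knots):
            if k != i:
                Li *= (x - xk) / (xi - xk)
        total += yi * Li
    return total

knots = [-2, -1, 0, 1, 2]
values = [1, -1, 1, -1, 1]
```

By construction P reproduces the data: L_i(x_j) = 1 when i = j and 0 otherwise.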
Compact Expressions
For a set {x_k} of n + 1 knots, we defined L_i(x) = ∏_{k=0..n, k≠i} (x − x_k)/(x_i − x_k).
This formula is computationally intensive.
Set w(x) = ∏_{k=0}^{n} (x − x_k).
1. The numerator of L_i is w(x)/(x − x_i).
2. The denominator of L_i is w(x)/(x − x_i) evaluated at x_i. Rewrite as
  w(x)/(x − x_i) = (w(x) − w(x_i))/(x − x_i)   (since w(x_i) = 0).
Take the limit as x → x_i:
  lim_{x→x_i} (w(x) − w(x_i))/(x − x_i) = w′(x_i)
Thus L_i(x) = w(x) / ((x − x_i) · w′(x_i)). A very compact formula!
More Knots
To decrease the error, use more knots. But . . . all the L_k(x) change.
1. Set {x_k} = {−2, 1, 2}. Then
  L₀(x) = (x−1)/(−2−1) · (x−2)/(−2−2) = (1/12)x² − (1/4)x + 1/6
  L₁(x) = (x+2)/(1+2) · (x−2)/(1−2) = −(1/3)x² + 4/3
  L₂(x) = (x+2)/(2+2) · (x−1)/(2−1) = (1/4)x² + (1/4)x − 1/2
Example
Let f(x) = x³ for x ∈ [0, 1]. Then B_n(f) = Σ_{k=0}^{n} (k³/n³) C(n, k) x^k (1 − x)^{n−k}
  B₁(x) = x
  B₂(x) = (1/4)x + (3/4)x²
  B₃(x) = (1/9)x + (2/3)x² + (2/9)x³
  B₄(x) = (1/16)x + (9/16)x² + (3/8)x³
[Figure: B₁(f) and B₄(f) plotted against f.]
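These polynomials can be generated exactly with rational arithmetic; a sketch using Python’s fractions, expanding x^k (1 − x)^{n−k} by the binomial theorem:

```python
from fractions import Fraction
from math import comb

def bernstein_coeffs(n):
    """Coefficients (ascending powers) of B_n(f) for f(x) = x^3."""
    c = [Fraction(0)] * (n + 1)
    for k in range(n + 1):
        fk = Fraction(k, n) ** 3                  # f(k/n)
        for j in range(n - k + 1):                # expand x^k (1-x)^{n-k}
            c[k + j] += fk * comb(n, k) * comb(n - k, j) * (-1) ** j
    return c

c4 = bernstein_coeffs(4)   # [0, 1/16, 9/16, 3/8, 0]
```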
Newton Interpolation
Now let
  P_n(x) = Σ_{k=0}^{n} a_k N_k(x)
Then
  P₅(x) = Σ_{k=0}^{5} [y₀, . . . , y_k] N_k(x)
Newton:
  N₅ = (125/6) x (x − 1/5)(x − 2/5)
     − (625/8) x (x − 1/5)(x − 2/5)(x − 3/5)
     + (625/4) x (x − 1/5)(x − 2/5)(x − 3/5)(x − 4/5)
Bernstein:
  B₅ = 10x³(1 − x)² + 5x⁴(1 − x) + x⁵
Splines
Lagrange and Newton polynomials oscillate excessively when there are a
number of closely spaced knots. To alleviate the problem, use “splines,”
piecewise, smaller-degree polynomials with conditions on their
derivatives. The two most widely used splines:
Bézier splines are piecewise Bernstein polynomials [de Casteljau (1959) and
Bézier (1962)].
Cubic B-splines are piecewise cubic polynomials with second derivative
equal to zero at the joining knots [Schoenberg (1946)].
Along with engineering, drafting, and CAD, splines are used in a wide
variety of fields. TrueType fonts use 2-D quadratic Bézier curves.
PostScript and MetaFont use 2-D cubic Bézier curves.
Exercises
For each of the functions given in 1. to 5.:
• Find the Lagrange polynomial of order 6
• Find the Newton polynomial of order 6
• Find the Bernstein polynomial of order 6
and plot the interpolation polynomial with the function.
1. f(x) = sin(2πx) on [0, 1]
2. g(x) = ln(x + 1) on [0, 2]
3. h(x) = tan(sin(x)) on [−π, π]
4. k(x) = x/(x² + 1) on [−10, 10]
5. S(x) = ∫₀ˣ [sin(½ t²) − x/√(2π)] dt for x ∈ [0, 10]
6. Find an interpolating polynomial for the data given below. Plot the
polynomial with the data.
  x: 0.0  1.0  2.0  3.0  4.0  5.0  6.0  7.0  8.0  9.0
  y: 4.2  2.2  2.0  8.7  5.7  9.9  0.44 4.8  0.13 6.4
Exercises
7. An error bound for Newton interpolation with n + 1 knots {x_k} is
  |f(x) − N(x)| ≤ (1/(n+1)!) · max |f^{(n+1)}(x)| · |∏_{k=0}^{n} (x − x_k)|
Show this bound is less than or equal to the Lagrange interpolation error
bound. How does this make sense in light of the unicity of interpolation
polynomials? (NB: The formula for Newton interpolation also applies to
Lagrange interpolation.)
8. Investigate interpolating Runge’s “bell function” r(x) = e^{−x²} on the
interval [−5, 5]
  a. with 10 equidistant knots.
  b. with “Chebyshev knots” x_k = 5 cos((n − j)π/n) with j = 0..10.
9. Write a Maple function that produces a difference tableau for a data set.
Test your function with the data set produced by
  > myData := [seq([k, rand(−9..9)()], k = 1..10)];
Sections
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
19 The Nspire series uses an ARM processor.
WabbitEmu is a calculator emulator for Linux, Mac, & Windows.
TI-80 Series Calculators
Timeline of the TI-80 Series
Model Year Z80 Processor RAM KB / ROM MB
TI-81 1990 2 MHz 2.4 / 0
TI-82 1993 6 MHz 28 / 0
TI-83 1996 6 MHz 32 / 0
TI-83 Plus 1999 6 MHz 32 / 0.5
TI-83 Plus SE 2001 15 MHz 128 / 2
TI-84 Plus 2004 15 MHz 128 / 1
TI-84 Plus SE 2004 15 MHz 128 / 2
TI-84 Plus C 2013 15 MHz 128 / 4
TI-84 Plus CE 2015 48 MHz 256 / 4
Floating point types: Real: 0; Complex: 0Ch; (List: 01h; Matrix: 02h; etc.)
EXP: Power of 10 exponent coded in binary, biased by 80h
DD: Mantissa in BCD, 7 bytes of two digits per byte. While the mantissa
has 14 digits, only 10 (+2 exponent digits) are displayed on the screen.
(Many math routines use 9 byte mantissas internally to improve accuracy.)
Examples:  3.14159265 = 00 80 31 41 59 26 50 00 00
          −230.45 = 80 82 23 04 50 00 00 00 00
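A sketch that reproduces the two examples (assumptions: real values only, with the sign carried in the type byte; rounding, complex types, and the internal 9-byte mantissas are ignored):

```python
import math

def ti_real(x):
    """Encode x in the TI-80 series real format sketched above:
    sign/type byte, exponent byte biased by 80h, 7 BCD mantissa bytes."""
    sign = 0x80 if x < 0 else 0x00
    x = abs(x)
    e = math.floor(math.log10(x))
    digits = f"{x / 10**e:.13f}".replace(".", "")[:14]   # 14 mantissa digits
    body = " ".join(digits[i:i + 2] for i in range(0, 14, 2))
    return f"{sign:02X} {0x80 + e:02X} {body}"
```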
  ∫ f(x) dx ≈ Σ_{k=1}^{n} f(x_k) · w_k.
4. Define Y1 to be the function f(x) = 10⁻⁸ (x − π/2)² / (10⁻¹⁶ + (x − π/2)²). Explain the
results from using solve with an ‘initial guess’ of:
  a. 0
  b. 1.5
5. Define f by f(x) = x^{1/3}. Compare evaluating ∫_{−1}^{1} f(x) dx with
  a. a TI-84+ SE,
  b. Maple.
10. Explain the possible sources of error when the calculator computes
  ∫₀¹ ( d/dT (T¹⁰) |_{T=X} ) dX
1. Estimate the derivative of F(x) for student scores that are one standard
deviation above the mean.
2. The minimum score for students scoring in the top 10% is found by
solving 0.10 = 1 F(x). Use a root finding method to find x.
F(a, b, c₀) = (s, c₁)
Collatz conjectured the sequence would always reach 1 no matter the starting
value n ∈ ℕ.
Currently. . .
The conjecture has been verified for all starting values up to 87 · 2⁶⁰ ≈ 10²⁰.
Read “The 3x + 1 Problem” by J. Lagarias (January 1, 2011 version) and check
Eric Roosendaal’s web site http://www.ericr.nl/wondrous/.
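A few lines of Python suffice to trace one trajectory (the starting value 27, with its 111 steps and peak 9232, is a standard stress case, not one from the Outline):

```python
def collatz(n):
    """Count 3n+1 / n/2 steps until n reaches 1; also track the peak value."""
    steps, peak = 0, n
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        peak = max(peak, n)
        steps += 1
    return steps, peak

steps, peak = collatz(27)   # 111 steps, peak 9232
```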
Parameter Choices
  Rotation Mode:  d_k = sgn(z_k)  (z_k → 0)
  Vectoring Mode: d_k = −sgn(y_k)  (y_k → 0)
  K = ∏_{j=0}^{n} cos(s_j),  K′ = ∏_{j=0}^{n} cosh(s_j)
Project
1. Modify the Maple program CORDIC[Trig] so as to compute arctan(θ).
2. Write a Maple program CORDIC[HyperTrig] that computes hyperbolic
trigonometric functions using CORDIC.
3. Write a Maple program CORDIC[Exp] that computes the exponential
function using CORDIC.
4. Write a Maple program CORDIC[Ln] that computes the logarithmic
function using CORDIC.
5. Report on the complex number basis of the CORDIC algorithm.
6. Create a presentation on the history of the CORDIC algorithm.
The Situation
Commissioner Loeb was murdered in his office. Dr. “Ducky” Mallard,
NCIS coroner, measured the corpse’s core temperature to be 90°F at
8:00 pm. One hour later, the core temperature had fallen to 85°F.
Looking through the HVAC logs to determine the ambient temperature,
Inspector Clouseau discovered that the air conditioner had failed at 4:00
pm; the Commissioner’s office was 68°F then. The log’s strip chart shows
Loeb’s office temperature rising at 1°F per hour after the AC failure; at
8:00 pm, it was 72°F.
First Steps
1. The office temperature is T_ambient = 72 + t where t = 0 is 8:00 pm.
4. To find k, use the other data point. Set T(1) = 85°F, then solve for k.
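Carrying the steps out numerically: solving T′ = k(T − (72 + t)) with an integrating factor gives T(t) = 72 + t + 1/k + (18 − 1/k)e^{kt} (a derived intermediate, not stated on the slide), and the condition T(1) = 85 is then a root-finding problem for k. A bisection sketch:

```python
import math

def g(k):
    """Residual of T(1) = 85 for T(t) = 72 + t + 1/k + (18 - 1/k) e^{k t}."""
    return 73 + 1/k + (18 - 1/k) * math.exp(k) - 85

lo, hi = -0.6, -0.2          # g changes sign on this bracket
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
k = (lo + hi) / 2            # about -0.34 per hour
```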
  + 53327946 x¹⁶ − 1672280820 x¹⁵
  + 40171771630 x¹⁴ − 756111184500 x¹³
  + 11310276995381 x¹² − 135585182899530 x¹¹
  + 1307535010540395 x¹⁰ − 10142299865511450 x⁹
  + 63030812099294896 x⁸ − 311333643161390640 x⁷
  + 1206647803780373360 x⁶ − 3599979517947607200 x⁵
[Figure: graph of the polynomial for 0 ≤ x ≤ 20, vertical scale −3·10¹² to 2·10¹².]
The Project
1. Describe what happens when trying to find the root at x = 20 using
Newton’s method.
2. Describe what happens when trying to find the root at x = 20 using
Halley’s method.
3. Discover what happens when the constant term is perturbed. I.e.,
investigate the roots of p(x) = w(x) + 10⁶.
4. Discover what happens when the x¹ term is perturbed. I.e.,
investigate the roots of p(x) = w(x) + x.
  x_k = −a_{n−1} x_{k−1} − a_{n−2} x_{k−2} − ··· − a₀ x_{k−n},  k = 1, 2, . . .
  x₀ = 1,  x_{−1} = x_{−2} = ··· = x_{−n+1} = 0
Then
  x_{n+1}/x_n → r
• Bernoulli’s method works best when p has simple roots and r is not ‘close’
to p’s next largest root.
• “If the ratio does not tend to a limit, but oscillates, the root of greatest
modulus is one of a pair of conjugate complex roots.” (Whittaker &
Robinson, The Calculus of Observations, 1924.)
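A compact Python sketch of the recurrence (tested on the monic cubic x³ − 2x − 5, an illustrative choice whose dominant root ≈ 2.09455148 is well separated from its complex pair):

```python
def bernoulli_root(coeffs, iterations=200):
    """coeffs = [a_{n-1}, ..., a_1, a_0] of a monic polynomial;
    iterate x_k = -a_{n-1} x_{k-1} - ... - a_0 x_{k-n}, return x_k / x_{k-1}."""
    n = len(coeffs)
    xs = [0.0] * (n - 1) + [1.0]          # x_{-n+1} = ... = x_{-1} = 0, x_0 = 1
    for _ in range(iterations):
        nxt = -sum(a * x for a, x in zip(coeffs, reversed(xs)))
        xs = xs[1:] + [nxt]
    return xs[-1] / xs[-2]

r = bernoulli_root([0, -2, -5])           # p(x) = x^3 - 2x - 5
```

Convergence of the ratio is linear, governed by how close the next-largest root modulus is to |r|.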
Deflating a Polynomial
Let p(x) be a polynomial. If a root r of p is known, then the deflated
polynomial p₁(x) is
  p₁(x) = p(x) / (x − r)
The coefficients of p₁ are easy to find using synthetic division.
The Technique
1. Use Bernoulli’s method to find r, the largest root of p
2. Deflate p to obtain p1
3. Repeat to find all roots.
Problem: Since there is error in r’s computation, there is error in p₁’s
coefficients. Error compounds quickly with each iteration.
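Synthetic division in Python (coefficients in descending powers; the cubic (x − 1)(x − 2)(x − 3) is an illustrative check, not from the Outline):

```python
def deflate(coeffs, r):
    """Divide p by (x - r) via synthetic division.
    Returns the quotient's coefficients and the remainder p(r)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    remainder = out.pop()
    return out, remainder

q, rem = deflate([1, -6, 11, -6], 1.0)    # x^3 - 6x^2 + 11x - 6 by (x - 1)
```

A nonzero remainder measures how inexact the computed root r was.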
The Project
1. Expand Wilkinson’s “perfidious polynomial” into standard form
  W(x) = ∏_{k=1}^{20} (x − k) = a_n xⁿ + a_{n−1} x^{n−1} + ··· + a₀
to give T(t) = [C(t), S(t)]. These integrals do not have elementary
antiderivatives, so they must be evaluated numerically.
A segment from the spiral T forms a transition curve which will provide a
smooth transition without a sudden change in lateral acceleration.
Graph C and S to see their respective behaviors.
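Assuming the standard Fresnel normalization C(t) = ∫₀ᵗ cos(πu²/2) du and S(t) = ∫₀ᵗ sin(πu²/2) du (the slide does not restate the integrands), the chapter’s composite Simpson’s rule evaluates them directly:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + k*h) for k in range(1, n, 2))
    s += 2 * sum(f(a + k*h) for k in range(2, n, 2))
    return s * h / 3

def C(t, n=2000):
    return simpson(lambda u: math.cos(math.pi * u*u / 2), 0.0, t, n)

def S(t, n=2000):
    return simpson(lambda u: math.sin(math.pi * u*u / 2), 0.0, t, n)
```

Sampling C and S over, say, [0, 5] produces the clothoid points [C(t), S(t)] for plotting.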