
Math 164: Week 2

Spring 2025
Numerical algorithms for evaluating derivatives

So far, we have seen how to evaluate a (non-polynomial) function using Taylor's polynomial to approximate the function, e.g., cos(0.15).
Next, we'd like to evaluate the derivative of a function, e.g., (d/dx) cos(x)|_{x=0.15} = cos′(0.15).

To find the derivative, we will recast Taylor's theorem in a different form.
1. Forward difference algorithm for f ′ (x)

We use Taylor's Theorem to find an approximation of the derivative f′(x) at x = x0, namely f′(x0).

f(x0 + h) = f(x0) + f′(x0)h + (f′′(x0)/2!)h² + ... + (f^(n)(x0)/n!)h^n + (f^(n+1)(s)/(n+1)!)h^(n+1)

Keep only the 0th- and 1st-order terms in h:

f(x0 + h) = f(x0) + f′(x0)h + O(h²)

Rewrite it in the form

f′(x0) = (f(x0 + h) − f(x0))/h + O(h)

Here, h is a parameter that needs to be determined.


We derive two quantities from the above expression.
• Forward difference (numerical algorithm):

f′_fwd(x0) = (f(x0 + h) − f(x0))/h

The forward difference f′_fwd(x0) provides an approximate value of f′(x0).
• Truncation error:

|f′(x0) − f′_fwd(x0)| = O(h)

The truncation error estimates the size of the error committed when mathematically approximating the true derivative f′(x0) with the forward difference.
It is of order O(h), i.e., the error grows or shrinks in proportion to h as h increases or decreases.
Example: find the numerical value of sin′ (1)

Let's evaluate the derivative sin′(x0) at x0 = 1.
Here f(x) = sin(x), and the forward difference of f(x) at x = x0 is

sin′_fwd(x0) = (sin(x0 + h) − sin(x0))/h
h is a parameter we need to determine.
Python example of forward difference
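A minimal sketch of what such a demo might look like (the step sizes and the use of NumPy are my assumptions, not the course's actual code):

import numpy as np

def forward_diff(f, x0, h):
    # Forward difference approximation of f'(x0)
    return (f(x0 + h) - f(x0)) / h

x0 = 1.0
exact = np.cos(x0)  # sin'(x) = cos(x), so sin'(1) = cos(1)
for h in [1e-1, 1e-2, 1e-4, 1e-8]:
    approx = forward_diff(np.sin, x0, h)
    print(f"h = {h:.0e}: approx = {approx:.10f}, error = {abs(approx - exact):.2e}")

The error column should shrink roughly in proportion to h, consistent with the O(h) truncation error.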
Round-off error can become larger than truncation error

Total error is given by

total error = |f′_fwd(x) − f′(x)| (1)

It includes two types of errors: truncation error and round-off error.

• Truncation error: the mathematical error due to approximating f′(x) with f′_fwd(x).
• Round-off error: the numerical error arising because f′_fwd(x) needs to be evaluated numerically.
The smallest round-off error of a double-precision number is

ϵ = 2^(−53) ≈ 10^(−16) (2)
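For reference, Python reports the machine epsilon directly; the unit round-off ϵ = 2^(−53) used above is half of it:

import sys
print(sys.float_info.epsilon)      # 2**-52 ≈ 2.22e-16 (machine epsilon)
print(sys.float_info.epsilon / 2)  # 2**-53 ≈ 1.11e-16 (unit round-off used above)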


In the above, we only considered the truncation error. Now, let’s
consider both the truncation and the round-off errors.
Include the round-off error in the forward difference algorithm:

total error = |(f(x + h) − f(x) + 2ϵ)/h − f′(x)| (3)
            ≤ |(f(x + h) − f(x))/h − f′(x)| + 2ϵ/h        (trunc. err + r.o. err) (4)
            ≤ (|f′′(s)|/2) h + 2ϵ/h (5)
            ≤ h/2 + 2ϵ/h (6)

The final inequality (6) uses |f′′(s)| ≤ 1, which holds for f(x) = sin(x).
Truncation error decreases as h decreases, but round-off error
increases as h decreases.
total error = h/2 + 2ϵ/h (7)

Its minimum value is attained where

(d/dh) total error = 1/2 − 2ϵ/h² = 0 (8)

h = (4ϵ)^(1/2) ≈ (4 · 10^(−16))^(1/2) = 2 · 10^(−8). (9)
Python demonstration of total error
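A sketch of such a demonstration (assumed code, not the original notebook): sweep h over many decades for f = sin at x0 = 1 and print the total error; the minimum should appear near h ≈ 2 · 10^(−8), as predicted by (9).

import numpy as np

x0 = 1.0
exact = np.cos(x0)  # true value of sin'(1)
for h in np.logspace(-16, -1, 16):
    err = abs((np.sin(x0 + h) - np.sin(x0)) / h - exact)
    print(f"h = {h:.0e}: total error = {err:.2e}")
# For large h the O(h) truncation error dominates; for tiny h the 2*eps/h
# round-off term dominates; the minimum sits near h ~ 1e-8.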
2. Centered difference algorithm for f ′ (x)
Can we improve the accuracy of numerical algorithms for f ′ (x)?
▶ Decreasing h to small values has limitations because of the round-off error.
▶ To avoid round-off error, we would have to switch to a higher-precision floating-point system.
▶ An alternative is to find algorithms that approximate f′(x) more accurately.
Manipulate Taylor’s polynomial again, but in a different way
f(x0 + h) = f(x0) + f′(x0)h + (1/2)f′′(x0)h² + (1/6)f′′′(s)h³ (10)
f(x0 − h) = f(x0) − f′(x0)h + (1/2)f′′(x0)h² − (1/6)f′′′(t)h³ (11)
Subtracting the two equations gives

f(x0 + h) − f(x0 − h) = 2f′(x0)h + Ch³ (12)

Then we obtain the centered difference algorithm:

f′(x0) = (f(x0 + h) − f(x0 − h))/(2h) + O(h²) (13)

which has numerical accuracy of O(h²).
Recall that the forward difference

f′(x0) = (f(x0 + h) − f(x0))/h + O(h) (14)

has numerical precision of O(h).
• This means that the truncation error goes to zero as h². If we decrease h by a factor of 0.1, then the truncation error of the centered difference decreases by a factor of 0.01.
• In contrast, the truncation error of the forward difference decreases only by a factor of 0.1.
Python example of centered difference
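A minimal sketch of such a comparison (my own illustrative code, assuming f = sin and x0 = 1):

import numpy as np

def centered_diff(f, x0, h):
    # Centered difference approximation of f'(x0); truncation error O(h^2)
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

x0, exact = 1.0, np.cos(1.0)
for h in [1e-1, 1e-2, 1e-3]:
    fwd_err = abs((np.sin(x0 + h) - np.sin(x0)) / h - exact)
    ctr_err = abs(centered_diff(np.sin, x0, h) - exact)
    print(f"h = {h:.0e}: fwd err = {fwd_err:.2e}, ctr err = {ctr_err:.2e}")

Each tenfold decrease in h should cut the centered error by about 100x, but the forward error by only about 10x.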
Round-off error of centered difference
We can follow the same analysis performed on the forward difference to estimate the total error of the centered difference.
The only change is that the truncation error is O(h²) (not O(h)). Then the total error is

total error = Dh² + 2ϵ/h (15)
Its minimum value is attained where

(d/dh) total error = 2Dh − 2ϵ/h² = 0

h = (ϵ/D)^(1/3) ≈ (10^(−16)/0.2)^(1/3) ≈ 10^(−5).

The critical value h ≈ 10^(−5) is larger than what we obtained for the forward difference (h ≈ 2 · 10^(−8)). This is because the truncation error of the centered difference decreases faster, so the round-off error comes to dominate the truncation error already at a larger value of h.
2.3 Condition number
Let's step back and consider the different types of errors that emerge when designing numerical algorithms for a mathematical problem.
A general mathematical problem can be described as:

given input data x, find the output y defined by y = F (x).

Examples of mathematical problems are:


▶ F (x) = f (x) (evaluate the value of a function)
▶ F (x) = f ′ (x) (evaluate the derivative of a function)

Then, we looked for a numerical algorithm F̃(x) that approximates the output y:

y ≈ ỹ = F̃(x)

Examples of numerical algorithms:
▶ F̃(x) = Tn(x) (Taylor polynomial)
▶ F̃(x) = f′_fwd(x) (forward difference)
Types of errors
input          math problem        numerical algo
x              F(x)                F̃(x)
x + δx         F(x + δx)           F̃(x + δx)

The gap between F(x) and F̃(x) is the numerical error; the sensitivity of the math problem to δx is measured by the condition number, and the sensitivity of the numerical algorithm to δx is its stability.
• So far we have considered the numerical error, i.e., how good the numerical approximation is:

numerical error (trunc. + round-off) = |F(x) − F̃(x)|.

• The condition number is about how sensitive the mathematical problem is to errors in the input:

absolute condition number = lim_{δx→0} |F(x) − F(x + δx)| / |δx|

relative condition number = lim_{δx→0} [ |F(x) − F(x + δx)| / |F(x)| ] / [ |δx| / |x| ]
Example: condition number of evaluating a function

The mathematical problem is to evaluate a function f(x). So, F(x) = f(x).

abs cond num = lim_{δx→0} |f(x) − f(x + δx)| / |δx| = |f′(x)|

rel cond num = lim_{δx→0} [ |f(x) − f(x + δx)| / |f(x)| ] / [ |δx| / |x| ] = |x f′(x)/f(x)|

• Well-conditioned problems have a small absolute or relative condition number (< 1).
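As a quick illustration (not from the slides; the choice of f(x) = tan(x) is mine), the two formulas can be evaluated directly:

import numpy as np

def cond_numbers(f, fprime, x):
    # Condition numbers of the problem "evaluate f at x"
    abs_cond = abs(fprime(x))
    rel_cond = abs(x * fprime(x) / f(x))
    return abs_cond, rel_cond

# tan(x) is well-conditioned near x = 0.1 but ill-conditioned near x = pi/2
for x in [0.1, np.pi / 2 - 0.01]:
    a, r = cond_numbers(np.tan, lambda t: 1 / np.cos(t) ** 2, x)
    print(f"x = {x:.4f}: abs cond = {a:.2e}, rel cond = {r:.2e}")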
Chapter 3. Root-finding methods
3.1 Roots of a function
Given a function f(x), a number r that satisfies

f(r) = 0

is called a root of f(x).
• Example of finding a root:
Find a steady state solution of a differential equation describing
the activity of a population of neurons:
dx/dt = −x + ϕ(x + I),    ϕ(x) = 1/(e^x + 1)
We want to find the steady state solution, i.e., x that satisfies
dx/dt = 0 or 0 = −x + ϕ(x + I ).
Then, the problem boils down to finding the root of
f (x) = −x + ϕ(x + I ).
Condition number of finding a root
How sensitively does r (i.e., a root of f (x)) change in response to
a change in the function f (x)?
To answer this question, let’s formulate the problem of finding a
root as a map F from a function f to its root r :
F : f → r  where  f(r) = 0.

Next, consider a perturbation of f(x):

f̃(x) = f(x) + ϵh(x).

Then, find a root r̃ of f̃(x):

F : f̃ → r̃  where  f̃(r̃) = 0.

How much does the root r̃ of f̃(x) deviate from the root r of f(x)? In other words, how large is δr in r̃ = r + δr?
Estimate δr by finding an approximation of its value.

0 = f̃(r̃) = f(r + δr) + ϵh(r + δr)
          = f(r) + f′(r)δr + (f′′(r)/2)(δr)² + ... + ϵ[h(r) + h′(r)δr + ...]
          ≈ 0 + f′(r)δr + ϵh(r)

Then, the first-order approximation of δr is:

δr = −ϵh(r)/f′(r).

The absolute condition number of F (i.e., of the problem of finding a root) is:

Ka(r) = |r̃ − r| / |f̃ − f| = |δr| / |ϵh(r)| = 1/|f′(r)|.
Example: condition number of finding a root
Consider a function

f(x) = e^(kx) − e^k  with k > 0,    f′(x) = k e^(kx)

The root of f is

f(r) = e^(kr) − e^k = 0  →  r = 1.

Let's show that the condition number of finding a root of f(x) depends on the parameter k.
The absolute condition number is

Ka(r) = 1/|f′(r)| = 1/(k e^k).

Here, k e^k increases monotonically as k increases, so Ka(r) decreases monotonically with k. This means that for a large k the root r is not sensitive to changes in f, but for a small k it is.
Python example: condition number of root finding
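A sketch of what this demo could check (assumed code; the choices h(x) = 1 and ϵ = 10^(−6) are illustrative): perturb f and compare the measured root shift with the first-order prediction δr = −ϵh(r)/f′(r).

import numpy as np

eps = 1e-6  # size of the perturbation: f_tilde(x) = f(x) + eps * h(x), with h(x) = 1
for k in [0.1, 5.0]:
    # f(x) = exp(k*x) - exp(k) has root r = 1; the perturbed root solves
    # exp(k * r_tilde) = exp(k) - eps, so it is known in closed form.
    r_tilde = np.log(np.exp(k) - eps) / k
    predicted = -eps / (k * np.exp(k))  # delta_r = -eps * h(r) / f'(r)
    print(f"k = {k}: measured shift = {r_tilde - 1.0:.3e}, predicted = {predicted:.3e}")

The shift is large for small k and tiny for large k, matching Ka(r) = 1/(k e^k).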
3.2 Bisection method
Theorem (Intermediate Value Theorem)
If f(x) is continuous on [a,b], then, between a and b, f(x) assumes
all possible values between f(a) and f(b).
In particular, if f(a) and f(b) have opposite signs, then there is at
least one root of f(x) between a and b.
Algorithm:
1. Choose a and b, such that f (a)f (b) < 0, i.e., f (a) and f (b)
have opposite signs.
▶ By the IVT, f (x) has a root r ∈ [a, b].
▶ The worst case error is |a − b|.
▶ Improve the error by dividing the interval [a, b] in half.
2. Define the mid-point: m = (a + b)/2.
▶ If f(a), f(m) have opposite signs → set b = m, i.e., update the interval to [a, m].
▶ If f(a), f(m) have the same sign → set a = m, i.e., update the interval to [m, b].
The worst-case error is now |a − b|/2.

3. Repeat Step 2 until the error tolerance is met.

At the end of Step 2, we are always left with an interval that contains a root of f(x).
After k repetitions, the length of the interval is |a − b|/2^k, where |a − b| is the length of the initial interval, so

error ≤ |a − b|/2^k
Python example of Bisection Method
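A self-contained sketch of the algorithm, applied to the neuron steady-state function f(x) = −x + ϕ(x + I) from Section 3.1 (the input I = 1 and the bracket [0, 1] are illustrative choices, not from the slides):

import numpy as np

def bisection(f, a, b, tol=1e-10, max_iter=100):
    # Find a root of f in [a, b]; requires f(a), f(b) of opposite signs.
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)          # Step 2: mid-point
        if f(a) * f(m) < 0:
            b = m                  # root lies in [a, m]
        else:
            a = m                  # root lies in [m, b]
        if b - a < tol:            # worst-case error |a - b| / 2^k
            break
    return 0.5 * (a + b)

phi = lambda x: 1 / (np.exp(x) + 1)
I = 1.0  # illustrative input value
f = lambda x: -x + phi(x + I)
root = bisection(f, 0.0, 1.0)   # f(0) > 0 and f(1) < 0, so a root is bracketed
print(f"steady state x* = {root:.8f}, f(x*) = {f(root):.2e}")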
