Sources of Approximation
I Before computation
    I modeling
    I empirical measurements
    I previous computations
I During computation
    I truncation or discretization (mathematical approximations)
    I rounding (arithmetic approximations)
I Accuracy of final result reflects all of these
I Uncertainty in input may be amplified by problem
I Perturbations during computation may be amplified by algorithm
From calculus: truncating a Taylor series after finitely many terms yields a polynomial approximation to a function, as used in the example below
Example: Data Error and Computational Error
I Suppose we need a “quick and dirty” approximation to sin(π/8) that we can compute without a calculator or computer
I Instead of true input x = π/8, we use x̂ = 3/8
I Instead of true function f (x) = sin(x), we use first term of Taylor series for sin(x), so that f̂ (x) = x
I We obtain approximate result ŷ = 3/8 = 0.3750
I To four digits, true result is y = sin(π/8) = 0.3827
I Computational error:
      f̂ (x̂) − f (x̂) = 3/8 − sin(3/8) ≈ 0.3750 − 0.3663 = 0.0087
I Propagated data error:
      f (x̂) − f (x) = sin(3/8) − sin(π/8) ≈ 0.3663 − 0.3827 = −0.0164
I Total error: f̂ (x̂) − f (x) ≈ 0.3750 − 0.3827 = −0.0077
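The same numbers can be reproduced in a few lines; this is a minimal sketch (not part of the original slides), with variable names of my own choosing:

```python
import math

x = math.pi / 8          # true input
x_hat = 3 / 8            # approximate input (data error)
f = math.sin             # true function
f_hat = lambda t: t      # approximate function: first Taylor term of sin

computational_error = f_hat(x_hat) - f(x_hat)   # ≈  0.0087
propagated_data_error = f(x_hat) - f(x)         # ≈ -0.0164
total_error = f_hat(x_hat) - f(x)               # ≈ -0.0077

# Total error is the sum of the two components
assert abs(total_error - (computational_error + propagated_data_error)) < 1e-15
```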
Truncation Error and Rounding Error
I Truncation error: difference between true result (for actual input) and result produced by given algorithm using exact arithmetic
    I Due to mathematical approximations such as truncating infinite series, discrete approximation of derivatives or integrals, or terminating iterative sequence before convergence
I Rounding error: difference between result produced by given algorithm using exact arithmetic and result produced by same algorithm using limited precision arithmetic
    I Due to inexact representation of real numbers and arithmetic operations upon them
I Computational error is sum of truncation error and rounding error
I One of these usually dominates
⟨ interactive example ⟩
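As an illustration of the two error types (my own sketch, not from the slides), the following approximates e = exp(1) with a truncated Taylor series, using exact rational arithmetic to separate truncation error from rounding error:

```python
import math
from fractions import Fraction

n = 10  # number of Taylor terms kept (an arbitrary choice for illustration)

# Exact-arithmetic value of the truncated series sum_{k=0}^{n-1} 1/k!
exact_partial = sum(Fraction(1, math.factorial(k)) for k in range(n))

# Same algorithm carried out in double-precision floating-point arithmetic
float_partial = sum(1.0 / math.factorial(k) for k in range(n))

truncation_error = float(exact_partial) - math.e        # from cutting the series
rounding_error = float_partial - float(exact_partial)   # from finite precision
print(truncation_error, rounding_error)  # truncation error dominates here
```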
Example: Finite Difference Approximation
I Error in finite difference approximation
      f ′(x) ≈ [f (x + h) − f (x)] / h
  exhibits tradeoff between rounding error and truncation error
I Truncation error bounded by Mh/2, where M bounds |f ″(t)| for t near x
I Rounding error bounded by 2ε/h, where error in function values is bounded by ε
I Total error minimized when h ≈ 2√(ε/M)
I Error increases for smaller h because of rounding error and increases for larger h because of truncation error
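A minimal sketch (my own) of this tradeoff for f(x) = sin(x) at x = 1, where the true derivative cos(1) is known:

```python
import math

f, x = math.sin, 1.0
exact = math.cos(x)  # true derivative of sin at x

for k in range(1, 17):
    h = 10.0 ** (-k)
    approx = (f(x + h) - f(x)) / h          # forward difference
    print(f"h = 1e-{k:02d}   error = {abs(approx - exact):.2e}")

# The error shrinks until roughly h ≈ sqrt(machine epsilon) ≈ 1e-8,
# then grows again as rounding error takes over.
```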
Example: Finite Difference Approximation
[Figure: log-log plot of error versus step size h for the finite difference approximation, showing rounding error dominating for small h, truncation error dominating for large h, and total error minimized in between]
Forward and Backward Error
I Suppose we want to compute y = f (x), where f : R → R, but obtain approximate value ŷ
I Forward error: difference between computed result ŷ and true output y,
      ∆y = ŷ − y
I Backward error: difference between actual input x and input x̂ for which computed result ŷ is exactly correct (i.e., f (x̂) = ŷ),
      ∆x = x̂ − x
Example: Forward and Backward Error
I As approximation to y = √2, ŷ = 1.4 has absolute forward error
      |∆y| = |ŷ − y| = |1.4 − 1.41421 . . . | ≈ 0.0142
  or relative forward error of about 1 percent
I Since √1.96 = 1.4, absolute backward error is
      |∆x| = |x̂ − x| = |1.96 − 2| = 0.04
  or relative backward error of 2 percent
I Ratio of relative forward error to relative backward error is so
important we will shortly give it a name
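A minimal sketch (my own, not from the slides) reproducing these numbers:

```python
import math

x, y_hat = 2.0, 1.4            # true input and approximate result for sqrt(2)
y = math.sqrt(x)               # true result

forward_error = y_hat - y      # ≈ -0.0142
x_hat = y_hat ** 2             # input for which 1.4 is exactly correct: 1.96
backward_error = x_hat - x     # = -0.04

rel_forward = abs(forward_error) / abs(y)     # ≈ 0.01  (1 percent)
rel_backward = abs(backward_error) / abs(x)   # = 0.02  (2 percent)
print(rel_forward / rel_backward)             # ≈ 0.5, the condition number of sqrt
```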
Backward Error Analysis
I Idea: approximate solution is exact solution to modified problem
I How much must original problem change to give result actually
obtained?
I How much data error in input would explain all error in computed
result?
I Approximate solution is good if it is exact solution to nearby
problem
I If backward error is smaller than uncertainty in input, then
approximate solution is as accurate as problem warrants
I Backward error analysis is useful because backward error is often
easier to estimate than forward error
Quick review from calculus: cos(x) = 1 − x²/2 + x⁴/4! − ⋯ (Taylor series), so truncating after two terms gives the approximation f̂ (x) = 1 − x²/2 used in the example below
Example, continued
I Approximate f (x) = cos(x) by the first two terms of its Taylor series, f̂ (x) = 1 − x²/2
I For x = 1,
      y = f (1) = cos(1) ≈ 0.5403
      ŷ = f̂ (1) = 1 − 1²/2 = 0.5
      x̂ = arccos(ŷ) = arccos(0.5) ≈ 1.0472
I Forward error: ∆y = ŷ − y ≈ 0.5 − 0.5403 = −0.0403
I Backward error: ∆x = x̂ − x ≈ 1.0472 − 1 = 0.0472
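A minimal sketch (my own) of the same computation:

```python
import math

x = 1.0
f = math.cos                       # true function
f_hat = lambda t: 1 - t**2 / 2     # truncated Taylor series for cos

y = f(x)                           # ≈ 0.5403
y_hat = f_hat(x)                   # = 0.5
x_hat = math.acos(y_hat)           # input for which 0.5 is the exact cosine, ≈ 1.0472

print(y_hat - y)                   # forward error ≈ -0.0403
print(x_hat - x)                   # backward error ≈ 0.0472
```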
Conditioning, Stability, and Accuracy
Well-Posed Problems
I Mathematical problem is well-posed if solution
    I exists
    I is unique
    I depends continuously on problem data
  Otherwise, problem is ill-posed
I Even if problem is well-posed, solution may still be sensitive to perturbations in input data
I Stability: computational algorithm should not make sensitivity worse
Sensitivity and Conditioning
I Problem is insensitive, or well-conditioned, if relative change in input causes similar relative change in solution
I Problem is sensitive, or ill-conditioned, if relative change in solution can be much larger than that in input data
I Condition number:
      cond = |relative change in solution| / |relative change in input data|
           = |[f (x̂) − f (x)]/f (x)| / |(x̂ − x)/x| = |∆y/y| / |∆x/x|
I Problem is sensitive, or ill-conditioned, if cond ≫ 1
Condition Number
I Condition number is amplification factor relating relative forward error to relative backward error
      relative forward error = cond × relative backward error
I Condition number usually is not known exactly and may vary with input, so rough estimate or upper bound is used for cond, yielding
      relative forward error ≲ cond × relative backward error
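A minimal sketch (my own) checking this relationship on the earlier √2 example, where cond = 1/2 for the square root:

```python
import math

x, x_hat = 2.0, 1.96
y, y_hat = math.sqrt(x), math.sqrt(x_hat)   # y_hat = 1.4 is exact for input 1.96

rel_backward = abs((x_hat - x) / x)         # 0.02
rel_forward = abs((y_hat - y) / y)          # ≈ 0.01
cond = 0.5                                  # |x f'(x) / f(x)| for f(x) = sqrt(x)

print(rel_forward, cond * rel_backward)     # both ≈ 0.01
```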
Example: Condition Number
I Consider f (x) = √x
I Since f ′(x) = 1/(2√x),
      cond ≈ |x f ′(x) / f (x)| = |x/(2√x) / √x| = 1/2
I So forward error is about half backward error, consistent with our previous example with √2
I Similarly, for f (x) = x²,
      cond ≈ |x f ′(x) / f (x)| = |x (2x) / x²| = 2
  which is reciprocal of that for square root, as expected
I Square and square root are both relatively well-conditioned
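A minimal sketch (my own) of the estimate cond ≈ |x f ′(x)/f (x)|, using a numerically estimated derivative rather than the exact one:

```python
import math

def cond(f, x, h=1e-6):
    """Rough estimate of |x f'(x) / f(x)| via a central difference."""
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return abs(x * fprime / f(x))

print(cond(math.sqrt, 2.0))        # ≈ 0.5
print(cond(lambda t: t * t, 2.0))  # ≈ 2.0
print(cond(math.tan, 1.57079))     # ≈ 2.5e5, the sensitive case on the next slide
```

The central difference and the step size h = 1e-6 are arbitrary choices for illustration; they give only a rough estimate, which is all that is usually needed for cond.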
Example: Sensitivity
I Tangent function is sensitive for arguments near π/2
I tan(1.57079) ≈ 1.58058 × 10⁵
I tan(1.57078) ≈ 6.12490 × 10⁴
I Relative change in output is a quarter million times greater than relative change in input
I For x = 1.57079, cond ≈ 2.48275 × 10⁵
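A minimal sketch (my own) reproducing these numbers directly:

```python
import math

x1, x2 = 1.57079, 1.57078
y1, y2 = math.tan(x1), math.tan(x2)   # ≈ 1.58058e5 and 6.12490e4

rel_change_in = abs(x1 - x2) / abs(x2)     # ≈ 6.4e-6
rel_change_out = abs(y1 - y2) / abs(y2)    # ≈ 1.58
print(rel_change_out / rel_change_in)      # ≈ 2.5e5, a quarter million
```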
Stability
I Algorithm is stable if result produced is relatively insensitive to
perturbations during computation
I Stability of algorithms is analogous to conditioning of problems
I From point of view of backward error analysis, algorithm is stable if
result produced is exact solution to nearby problem
I For stable algorithm, effect of computational error is no worse than
effect of small data error in input
Accuracy
I Accuracy : closeness of computed solution to true solution (i.e.,
relative forward error)
I Stability alone does not guarantee accurate results
I Accuracy depends on conditioning of problem as well as stability of
algorithm
I Inaccuracy can result from
I applying stable algorithm to ill-conditioned problem
I applying unstable algorithm to well-conditioned problem
I applying unstable algorithm to ill-conditioned problem (yikes!)
I Applying stable algorithm to well-conditioned problem yields
accurate solution
Summary – Error Analysis
I Scientific computing involves various types of approximations that
affect accuracy of results
I Conditioning: Does problem amplify uncertainty in input?
I Stability: Does algorithm amplify computational errors?
I Accuracy of computed result depends on both conditioning of
problem and stability of algorithm
I Stable algorithm applied to well-conditioned problem yields accurate
solition