This document provides an introduction to numerical analysis and numerical methods. It discusses the differences between analytical and numerical solutions, and how computers are used to evaluate numerical solutions. Examples of numerical methods like bisection are presented, along with concepts like iteration, error analysis, and convergence rates. Sources of error in numerical procedures like rounding errors are explained. Absolute and relative errors are defined, and the IEEE floating point number standard is introduced.


Chapter 0 Preliminaries

• Introduction
• Analysis vs. numerical analysis
• Computers and numerical analysis
• A Typical Example
• Implementing Bisection
• Computer Arithmetic and Errors
• Interval arithmetic
• Measuring the efficiency of numerical procedures

Numerical Method, Chap 0 2024 1 Jung Hong Chuang, CS Dept., NYCU


Introduction
• Operations in numerical methods
– The only operations required are +, -, *, /, and
comparisons
• Characteristics of numerical solutions
– Always numerical
– An approximation
• Numerical methods
– The development and study of efficient and
robust procedures for solving problems with a computer
• Efficiency
– Computing & Memory
• Accuracy and robustness



Analytic vs. numerical solution
• Analytical solution vs. numerical solution
– Example: Solving f(x)=0
– Even evaluating an analytical solution is subject
to error unless exact arithmetic is used
• Numerical solution
– Is always an approximation
• Require error analysis for accuracy concerns
• Need to consider robustness
– Requires computers to evaluate the numerical
solutions
• Require time analysis for efficiency concerns



Computers and numerical analysis
• Numerical solution
– Algorithms or procedures
• Tools for the numerical solution
– Programs written in Fortran, C, C++, Java,…
– Computer algebra systems
• Mathematica, Maple
– Numerical packages (good for real-world
problems)
• MATLAB
• IMSL, LAPACK (linear algebra packages)
• LINPACK, EISPACK



Computers and numerical analysis
• Computer Algebra System
– Able to perform mathematics symbolically, and also
carry out numerical procedures with very high
precision
– Good for small problems only
– Offer an excellent learning environment
– Very easy to use



Many numerical solutions are iterative

Example: Bisection
• Find a root of f(x)=0 by starting with two
values that enclose the root
– The interval contains at least one root if f(x) is
continuous and changes sign over it
– Halve the interval and test for a sign change
• Numerical solutions involve
– Iteration
– Error analysis
– Convergence rate
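The bisection idea above can be sketched in Python (a minimal illustration, not code from the slides; the function name, tolerance, and iteration cap are my own choices):

```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Find a root of f(x) = 0 in [a, b] by repeated halving.

    Assumes f is continuous and f(a), f(b) have opposite signs,
    so the interval is guaranteed to contain at least one root.
    """
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0.0 or (b - a) / 2.0 < tol:
            return m
        # keep the half-interval where the sign change occurs
        if fa * fm < 0.0:
            b = m
        else:
            a, fa = m, fm
    return (a + b) / 2.0
```

Each iteration halves the bracketing interval, so after k steps the error is at most (b − a)/2^k; this is the kind of convergence-rate statement the slide refers to.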



Errors in numerical procedures
• Error in original data – due to measurement
– Hard to overcome such errors, but we may need to
find how sensitive the results are to changes in
the input information (sensitivity analysis)
• Human error – ex. recording errors
• Truncation error
– Due to the method itself. Ex.: a truncated Taylor
series of a function
• Round-off error
– Computers with limited precision
• Propagated error
Errors in numerical procedures
• Round-off error
– Finite representation in computers
– Numbers are rounded when stored as floating-
point numbers
• Propagated error
– Errors propagated to succeeding steps of a
process due to the occurrence of an earlier error
– It is of critical importance
• Well-conditioned vs. ill-conditioned
• Stable vs. unstable
– Stable: earlier errors die out as the method continues
– Unstable: errors are magnified continuously and eventually
overshadow the true values
Absolute vs. relative error
• Absolute error
|true value - approximate value|
– By itself, not a good measure of accuracy:
– 1036.52 ± 0.010
• accurate to 5 significant digits
• adequate precision
– 0.005 ± 0.010
• bad precision

• Relative error
absolute error / |true value|
– More independent of the scale of the value
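The two slide examples can be checked numerically (a small Python illustration; the helper names are my own):

```python
def abs_error(true_value, approx):
    """Absolute error: |true value - approximate value|."""
    return abs(true_value - approx)

def rel_error(true_value, approx):
    """Relative error: absolute error / |true value|."""
    return abs_error(true_value, approx) / abs(true_value)

# Same absolute error (0.01) but very different relative error:
r_large = rel_error(1036.52, 1036.51)   # tiny: 5 significant digits agree
r_small = rel_error(0.005, 0.015)       # 2.0: no digits are meaningful
```

The identical absolute error hides how different the two situations are, which is exactly why the relative error is the more scale-independent measure.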



Significant digits
• Another term for expressing accuracy
• Significant digits:
– How many digits in the number have meaning
– A formal definition:
Let the true value have digits d1 d2 d3 … dn dn+1 … dp.
Let the approximate value have digits d1 d2 d3 … dn en+1 … ep,
where d1 ≠ 0 and dn+1 ≠ en+1.

Both values agree to n significant digits if
|dn+1 - en+1| < 5.
Otherwise, they agree to n-1 significant digits.
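As a rough numerical stand-in for this digit-by-digit definition, one can use the common rule of thumb that agreement to n significant digits corresponds to a relative error below 5 × 10^-n (my own sketch, not the slide's exact definition):

```python
import math

def significant_digits(true_value, approx):
    """Estimate digits of agreement via relative error < 5 * 10**-n.

    A rule-of-thumb approximation of the digit-by-digit definition.
    """
    if true_value == approx:
        return math.inf
    rel = abs(true_value - approx) / abs(true_value)
    return math.floor(-math.log10(rel / 5.0))
```

For the earlier example, significant_digits(1036.52, 1036.51) gives 5, matching the claim that 1036.52 ± 0.01 is accurate to 5 significant digits.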



Floating-point arithmetic
• Computers store real numbers as floating-
point numbers, for example
– Normalized mantissa: 0.d1d2…dk × 10^p, d1 ≠ 0
– 13.524 as .13524E2
– -0.0442 as -.442E-1
• IEEE standard – the most common format
– A binary computer number ±q × 2^m has 3 parts:
• Sign
• Fraction part (mantissa)
– Normalized mantissa: 0.b1b2…bk × 2^m, b1 ≠ 0, i.e., b1 = 1 (so no
need to store it)
– A notable exception is zero: all 0s in exponent and mantissa
• Exponent part
Floating-point arithmetic
IEEE standard – a number is stored as ±q × 2^m
• 3 levels of precision
– Single precision
• Length: 32 bits; sign: 1, mantissa q: 23(+1), exponent |m|: 8
• Largest m: 2^7 - 1 = 127, so the range is about 10^±38 (2^127 ≈ 10^38)
– Double precision
• Length: 64 bits; sign: 1, mantissa: 52, exponent: 11
• Range: about 10^±308
– Extended precision
• Length: 80 bits; sign: 1, mantissa: 64, exponent: 15
• Range: about 10^±4931
• Biased exponent
– Allows negative exponents without an exponent sign bit,
by adding a bias value to the actual exponent so that all
stored exponents range from 0 to the maximum
– For single precision the bias value is 127: an actual
exponent of -127 is stored as 0, and 128 is stored as 255
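The three fields and the exponent bias can be inspected directly with Python's struct module (my own illustration of the single-precision layout; the function name is an assumption):

```python
import struct

def float32_fields(x):
    """Return (sign, biased exponent, fraction) of x's IEEE 754
    single-precision encoding: 1 + 8 + 23 bits."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # actual exponent plus the bias of 127
    fraction = bits & 0x7FFFFF       # 23 stored mantissa bits; leading 1 implicit
    return sign, exponent, fraction
```

For 1.0 = +1.0 × 2^0 the stored exponent is 0 + 127 = 127; for -2.0 = -1.0 × 2^1 it is 128; and zero really is stored as all-zero bits.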


Floating-point arithmetic
IEEE standard
• Infinitely many reals vs. finitely many floating-point numbers
– Gap between the true number and the stored number
– Results in round-off error
• There is a largest and a smallest representable number
– Ex. single precision
• Smallest: 2.93873E-39, Largest: 3.40282E+38
– Overflow: quantities > maximum; the value is replaced
with a special bit pattern
– Underflow: quantities < minimum; many systems replace
the value with zero
– In the IEEE standard, zero is stored as all zeros in sign,
mantissa, and exponent; zero cannot be normalized.


Floating-point arithmetic
Examples
• 6 bits (1 for sign, 2 for exponent, 3 for mantissa)
– Smallest positive: 0 (1)001 00 → 9/16 × 2^-1 = 9/32
– Largest positive: 0 (1)111 11 → 15/16 × 2^2 = 15/4
• Normalized mantissa: the smallest positive number cannot
have a mantissa of all zeros, because that bit pattern is
reserved for the number zero.

[Number line of the representable values, grouped into decades
× 2^-1, × 2^0, × 2^1, × 2^2]
Floating-point arithmetic
Examples
• The gap between 0 and the smallest positive number is
extremely large, because the mantissa is normalized.
• There is a large gap between each "decade" as well.
• In each decade there are seven values, so there are
4 × 7 = 28 positive numbers.
• Likewise there are 28 negative numbers.



Floating-point arithmetic
Examples
• Testing for equality in a program
– Don't do this: if A == B, then …
– Do this instead: if |A - B| <= TOL, then …
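A concrete Python illustration using the classic 0.1 + 0.2 example (nearly_equal is my own helper; the standard library's math.isclose does the same job with a relative tolerance):

```python
def nearly_equal(a, b, tol=1e-9):
    """Compare floats with a tolerance instead of exact equality."""
    return abs(a - b) <= tol

x = 0.1 + 0.2                  # stored as 0.30000000000000004...
exact = (x == 0.3)             # False: the == test fails
close = nearly_equal(x, 0.3)   # True: the tolerance test passes
```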



Floating-point arithmetic
• Machine epsilon
– The smallest machine number eps such that
1 + eps is not stored as 1
• i.e., 1 + eps is the first computer number larger than 1
– Depends on the precision of the computer system
• For a computer that stores N-bit (normalized)
mantissas:
eps = 2^-(N-1) if chopping is used
eps = 2^-N if rounding is used
• For single precision, N = 23:
– eps = 2^-23 = 1.192E-07 (rounding)
Floating-point arithmetic

Machine epsilon is the smallest eps such that 1 + eps ≠ 1 (as stored).

For single precision, 23-bit mantissa (24-bit normalized mantissa):
1 = (0.100…0)_2 × 2^1
The next larger machine number is
1 + eps = (0.100…01)_2 × 2^1
The difference between these two numbers is
eps = (0.00…01)_2 × 2^1
    = 2^-24 × 2^1
    = 2^-23

P.9, Applied Numerical Methods for Engineers, by Schilling and Harris
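The same construction can be run in code. Python floats are IEEE double precision (52 stored mantissa bits), so this halving loop (my own sketch) finds 2^-52 rather than the single-precision 2^-23:

```python
import sys

def machine_eps():
    """Smallest power of two eps such that 1.0 + eps != 1.0 as stored."""
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    return eps

# machine_eps() agrees with the system-reported double-precision epsilon
```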
Round-off vs. truncation error
• Round-off error
– Due to imperfect precision of the computer
– Occurs even when the procedure is exact
• Truncation error
– Caused by a procedure that does not give
precise results even when the arithmetic is
precise
– Ex: evaluate f(x) using truncated Taylor series
• Computational error
– Sum of round-off error and truncation error
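A quick illustration of truncation error (my own sketch): approximating e^x by a truncated Taylor series produces an error that comes from the method itself, not from the arithmetic, and it shrinks as more terms are kept.

```python
import math

def exp_taylor(x, n_terms):
    """Approximate e**x by the first n_terms of its Taylor series."""
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

err_short = abs(math.exp(1.0) - exp_taylor(1.0, 4))   # few terms: visible error
err_long = abs(math.exp(1.0) - exp_taylor(1.0, 10))   # more terms: much smaller
```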



Well-posed and well-conditioned problems

• Accuracy of a numerical solution depends on
– Computer accuracy for storing numbers
– Condition of the problem
– Stability of the numerical solution
• Stable: early errors are damped out as the computation
proceeds; they do not grow without bound
• A problem is well-posed if
– a solution exists and is unique, and
– the solution varies continuously when
values of its input vary continuously



Well-posed and well-conditioned problems
• Condition of a problem
– Well-conditioned
• Not sensitive to input inaccuracy
• The change (error) in the output is not greater than the
change (error) in the input
– Ill-conditioned
• Sensitive to input inaccuracy
• A small change (error) in the input causes a large
change (error) in the output
– Condition number C
C = |relative error in output| / |relative error in input|
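This ratio can be estimated numerically by perturbing the input slightly (a finite-difference sketch of my own; dx is an assumed small relative perturbation, not a value from the slides):

```python
import math

def condition_estimate(f, x, dx=1e-6):
    """Estimate C = (relative error in output) / (relative error in input)
    by applying a small relative perturbation dx to the input."""
    y = f(x)
    y_perturbed = f(x * (1.0 + dx))
    rel_out = abs(y_perturbed - y) / abs(y)
    return rel_out / dx

well = condition_estimate(math.sqrt, 2.0)            # ~0.5: well-conditioned
ill = condition_estimate(lambda t: t - 1.0, 1.0001)  # huge: ill-conditioned near 1
```

The square root damps input errors (C ≈ 0.5), while subtracting nearly equal numbers magnifies them enormously, which is the ill-conditioned case described above.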



Examples
• Compute the value in single precision

• The expression can be reduced to


Z=x^2/x^2 = 1
Forward and backward error analysis

• Evaluate y = f(x)
– y_calc is the computed value with input x

– Forward error
E_fwd = y_calc - y_exact

– Backward error
Let x_calc be the x value that gives y_calc with no
computational error
E_backw = x_calc - x
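For example (my own illustration, using an artificially low-precision square root as the "computed" result):

```python
import math

x = 2.0
y_exact = math.sqrt(x)   # 1.41421356...
y_calc = 1.4142          # pretend this is the computed, rounded result

forward_error = y_calc - y_exact   # error measured in the output
x_calc = y_calc ** 2               # the input that yields y_calc exactly
backward_error = x_calc - x        # error referred back to the input
```

Here y_calc is the exact square root of the slightly different input x_calc ≈ 1.99996164, so a small backward error fully accounts for the computed answer.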



Interval arithmetic
• Interval arithmetic allows us to find how parameter
errors are propagated through the sequence of
computer operations in a procedure
• Works on intervals representing a range of numbers
Ex: 2.4 ± 0.05 → [2.35, 2.45]
• Rules
A + B = [aL, aR] + [bL, bR] = [aL + bL, aR + bR]
A - B = [aL, aR] - [bL, bR] = [aL - bR, aR - bL]
A * B = [min(S), max(S)]
where
S = {aL*bL, aL*bR, aR*bL, aR*bR}

Examples:
[0.5, 0.8] + [-1.2, 0.1] = [-0.7, 0.9]
[0.5, 0.8] - [-1.2, 0.1] = [0.4, 2.0]
[0.5, 0.8] * [-1.2, 0.1] = [-0.96, 0.08]
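The three rules translate directly into Python (a minimal sketch; intervals are (left, right) tuples and the function names are my own):

```python
def iadd(a, b):
    """[aL, aR] + [bL, bR] = [aL + bL, aR + bR]"""
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):
    """[aL, aR] - [bL, bR] = [aL - bR, aR - bL]"""
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    """[aL, aR] * [bL, bR] = [min(S), max(S)] over all endpoint products"""
    s = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(s), max(s))

A, B = (0.5, 0.8), (-1.2, 0.1)   # the example intervals above
```

These reproduce the worked examples: A + B = [-0.7, 0.9], A - B = [0.4, 2.0], A * B = [-0.96, 0.08] (up to floating-point rounding; a careful implementation would also round endpoints outward).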
Measuring efficiency/error
• How to measure efficiency?
– Based on operation count
– O(n): order of n, O(n^2): order of n^2
• How to measure error?
– Based on the size of a parameter, say h. For example:
• Solve dy/dx = f(x, y) with a value given for y at some value
of x
• Some methods add a weighted sum of estimated values of
the derivative function at evenly spaced x-values that
differ by h
• For one method the error = (M/6)*h^3, where M depends
on a value of the 3rd derivative of f(x, y)
– O(h^3)

