Week 1: Errors and Approximation
Numerical Methods and Analysis
Engr. Ranzel Dimaculangan, ECT
01
Numerical Methods and Analysis
Numerical Analysis
• Definition: Numerical analysis is the study of algorithms that use numerical
approximation (as opposed to symbolic or analytical computation) for the
problems of mathematical analysis.
• Focus: It is concerned with the accuracy, stability, and efficiency of these
algorithms. Numerical analysis seeks to understand the errors involved in
approximations and how they propagate through the computations.
• Purpose: The goal is to provide a theoretical foundation for why and how
these methods work, including convergence, error bounds, and robustness.
Numerical analysis often deals with the development of new algorithms that
are more accurate or efficient.
• Example Topics: Error analysis, stability of algorithms, convergence of
sequences and series, numerical differentiation and integration, and
eigenvalue problems.
Numerical Methods
• Definition: Numerical methods are the techniques or algorithms used to
obtain numerical solutions to mathematical problems that may not have exact
analytical solutions.
• Focus: It is more applied, focusing on the implementation of specific
techniques to solve practical problems. Numerical methods involve the actual
procedures or algorithms to approximate the solution of mathematical
problems.
• Purpose: The goal is to develop and apply techniques that can be used in
practice to solve problems in science, engineering, and other fields. Numerical
methods emphasize the implementation and computational aspects.
• Example Topics: Methods for solving linear and nonlinear equations (e.g.,
Newton's method), numerical integration (e.g., Simpson's rule), finite
difference methods for differential equations, and Monte Carlo methods.
Analytical vs Numerical Solutions
Analytical Solutions
These are exact, derived using mathematical methods, and are typically
preferred when the problem allows it. They provide deep insights but
are limited in scope.
Numerical Solutions
These are approximate, obtained through computational methods, and
are necessary for complex or real-world problems where analytical
solutions are not feasible. They offer flexibility but at the cost of
potential numerical errors and the need for computational resources.
Analytical vs Numerical Solutions
• Methodology. Analytical: involves deriving an exact solution using mathematical techniques. Numerical: involves approximating the solution by discretizing the problem and solving it iteratively.
• Applicability. Analytical: suitable for simpler or well-defined problems where the equation can be manipulated into a solvable form. Numerical: applicable to a wide range of problems, including those that are too complex or impossible to solve analytically.
• Accuracy. Analytical: provides an exact solution, assuming no algebraic errors are made. Numerical: provides an approximate solution, with the accuracy depending on the numerical method and step size used.
• Complexity. Analytical: can be complex and challenging to derive, especially for non-trivial equations or systems. Numerical: the complexity lies more in the implementation than in the derivation.
• Typical Use Cases. Analytical: commonly used in simpler physics problems, basic differential equations in engineering, and theoretical work where understanding the exact relationship between variables is essential. Numerical: common in real-world applications where the system is too complex for an exact solution, such as fluid dynamics, weather modeling, structural analysis, and large-scale simulations.
Example
Solve the quadratic equation x² − 3x + 2 = 0 using analytical and numerical computation.
Using Analytical Computation
Solve the quadratic equation x² − 3x + 2 = 0 using analytical and numerical computation.
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
Solution: x₁ = 2, x₂ = 1
Using Numerical Computation
Solve the quadratic equation x² − 3x + 2 = 0 using analytical and numerical computation.
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}
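As a concrete illustration (added here, not part of the original slides), the short Python sketch below solves the same equation both ways: once with the closed-form quadratic formula and once with Newton's iteration. The starting guess x0 = 3 and the stopping tolerance are arbitrary choices for this sketch.

```python
import math

def f(x):
    """f(x) = x^2 - 3x + 2, the quadratic from the example."""
    return x**2 - 3*x + 2

def df(x):
    """Derivative f'(x) = 2x - 3, needed by Newton's method."""
    return 2*x - 3

# Analytical computation: quadratic formula x = (-b ± sqrt(b^2 - 4ac)) / (2a)
a, b, c = 1, -3, 2
disc = math.sqrt(b**2 - 4*a*c)
print("analytical roots:", (-b + disc) / (2*a), (-b - disc) / (2*a))  # 2.0 1.0

# Numerical computation: Newton's method x_{n+1} = x_n - f(x_n)/f'(x_n)
x = 3.0                           # initial guess (an arbitrary assumption)
for _ in range(50):
    x_next = x - f(x) / df(x)
    if abs(x_next - x) < 1e-10:   # stop when successive iterates agree
        break
    x = x_next
print("Newton's method root:", x_next)   # converges to 2.0 from x0 = 3
```

Starting the iteration near the other root instead (for example x0 = 0) makes the same loop converge to x = 1.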
• Final Term
1. Interpolating Polynomial
2. Numerical Integration and Differentiation
3. Numerical Solutions of Ordinary Differential Equations
02
Errors and Approximations
Review: Polynomial Functions
• Definition: A polynomial function is a function that can be expressed in the form P(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀, where aₙ, aₙ₋₁, …, a₀ are constants and n is a non-negative integer.
• Examples: P(x) = x² − 3x + 2 (degree 2), P(x) = 4x³ + x − 7 (degree 3), P(x) = 5 (degree 0).
• Properties:
• Polynomials are continuous and differentiable everywhere.
• The degree of the polynomial is the highest power of x that appears in
the function.
• The roots (solutions where P(x)=0) can be real or complex, and the
number of roots is equal to the degree of the polynomial.
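To make the definition above concrete, here is a small sketch (an addition, not from the slides) that evaluates a polynomial from its coefficient list using Horner's rule; the coefficients [1, -3, 2], i.e. x² − 3x + 2, are just an illustrative choice.

```python
def horner(coeffs, x):
    """Evaluate a_n*x^n + ... + a_1*x + a_0 given coeffs = [a_n, ..., a_1, a_0]."""
    result = 0.0
    for a in coeffs:
        result = result * x + a   # fold in one coefficient per power of x
    return result

coeffs = [1, -3, 2]               # P(x) = x^2 - 3x + 2, degree 2
print(horner(coeffs, 1.0))        # 0.0  (x = 1 is a root)
print(horner(coeffs, 2.0))        # 0.0  (x = 2 is a root)
print(horner(coeffs, 0.0))        # 2.0  (the constant term a_0)
```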
Review: Transcendental Functions
• Definition: Transcendental functions are functions that cannot be expressed as a finite combination of algebraic operations (addition, subtraction, multiplication, division, and taking roots) on the variable x. These functions go beyond polynomials.
• Examples: sin x, cos x, tan x, eˣ, ln x.
• Properties:
• Transcendental functions often exhibit more complex behavior, including
periodicity (like trigonometric functions) or rapid growth (like exponential
functions).
• They are also continuous and differentiable, but their derivatives may involve other
transcendental functions.
• They cannot generally be solved in terms of elementary algebraic operations.
Accuracy vs Precision
• Although the numerical technique yielded estimates that were close
to the exact analytical solution, there was a discrepancy, or error,
because the numerical method involved an approximation.
• The errors associated with both calculations and measurements can be characterized in terms of two concepts:
1. Absolute Error. This gives a direct measure of the deviation from the true value.
Absolute Error = |True Value − Computed Value|
2. Relative Error. This normalizes the error by the true value, which is useful when
the true value has a large magnitude.
Relative Error = |True Value − Computed Value| / |True Value|
Precision, on the other hand, can be characterized using measures of spread:
1. Standard Deviation. It measures how much the computed values deviate from their mean.
\text{Standard Deviation} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^2}
2. Variance. It is the expectation of the squared deviation of each number from the
mean of a dataset.
\text{Variance} = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2
3. Range. It is a simple measure of variability or spread in a dataset.
Example
Computed values: 1st iteration = 3.14, 2nd iteration = 3.142, 3rd iteration = 3.1416
Assuming the true value is π ≈ 3.14159
Accuracy:
• Compute the absolute error for each value.
• Compute the relative error by dividing the absolute error by π.
Precision:
• Compute the standard deviation or variance of the computed values to assess the
consistency.
• Compute the range of the computed values to get a sense of their spread.
Example
Consider you are using a numerical method to approximate the value of π, and after multiple
iterations, you obtain the following results:
Computed values: 1st iteration = 3.14, 2nd iteration = 3.142, 3rd iteration = 3.1416
Assuming the true value is π ≈ 3.14159
Accuracy:
• Absolute error = 0.00159, 0.00041, 0.00001
• Relative error = 5.061131 × 10⁻⁴, 1.305072 × 10⁻⁴, 3.183102 × 10⁻⁶
• Percentage error = 5.061131 × 10⁻² %, 1.305072 × 10⁻² %, 3.183102 × 10⁻⁴ %
Precision:
• Standard deviation = 0.000864
• Variance = 7.466667 × 10⁻⁷
• Range = 3.1420 − 3.14 = 0.002
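The numbers quoted above can be checked with a short script; the sketch below (added here, not part of the slides) computes the accuracy and precision measures for the three iterates, using the population forms (dividing by n) for the standard deviation and variance, which is what the slide's figures correspond to. Small differences in the last digits are floating-point rounding.

```python
import math

true_value = 3.14159
values = [3.14, 3.142, 3.1416]          # 1st, 2nd, 3rd iteration

# Accuracy: deviation of each iterate from the true value
for v in values:
    abs_err = abs(true_value - v)
    rel_err = abs_err / abs(true_value)
    print(f"value {v}: abs = {abs_err:.5f}, rel = {rel_err:.6e}, pct = {100 * rel_err:.6e} %")

# Precision: spread of the iterates about their own mean (population forms)
mean = sum(values) / len(values)
variance = sum((v - mean) ** 2 for v in values) / len(values)
std_dev = math.sqrt(variance)
value_range = max(values) - min(values)
print(f"std dev = {std_dev:.6f}, variance = {variance:.6e}, range = {value_range:.4f}")
```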
Error Definitions
• Numerical errors arise from the use of approximations to
represent exact mathematical operations and quantities.
• Two sources of error in numerical calculations:
1. Round-off errors which result when numbers having
limited significant figures are used to represent exact
numbers.
2. Truncation errors which result when approximations
are used to represent exact mathematical procedures.
Round-off Errors
• Round-off errors originate from the fact that computers retain only a
fixed number of significant figures during a calculation.
• Numbers such as π, e, or square root of 7 cannot be expressed by a
fixed number of significant figures.
• Therefore, they cannot be represented exactly by the computer. In
addition, because computers use a base-2 representation, they
cannot precisely represent certain exact base-10 numbers. The
discrepancy introduced by this omission of significant figures is
called round-off error.
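A quick way to see this base-2 limitation in practice (an added illustration, not from the slides) is that a simple decimal fraction such as 0.1 has no finite binary expansion, so even a one-line sum carries round-off error:

```python
from decimal import Decimal

# 0.1 and 0.2 cannot be stored exactly in binary floating point,
# so their sum differs slightly from 0.3.
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# The value actually stored for 0.1 is slightly larger than one tenth:
print(Decimal(0.1))       # 0.1000000000000000055511151231257827021181583404541015625
```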
Round-off Errors
• Numerical round-off errors are directly related to the manner in which
numbers are stored in a computer.
• A number system is merely a convention for representing quantities.
Examples are decimal (base-10), octal (base-8), and binary (base-2).
• Examples:
• 173₁₀ (base-10) = 255₈ (base-8) = 10101101₂ (base-2)
• −288₁₀ (base-10) = −440₈ (base-8) = −100100000₂ (base-2)
• Positional notation is a system for representing numbers where the value of a digit depends on its position within the number. The sum of products always gives the decimal equivalent of a number written in any numeral system. Examples:
• (8 × 10⁴) + (6 × 10³) + (4 × 10²) + (0 × 10¹) + (9 × 10⁰) = 86,409₁₀
• 11₂ = (1 × 2¹) + (1 × 2⁰) = 3₁₀
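The sum-of-products rule maps directly to code. The sketch below (an addition, not from the slides) expands a digit string in a given base back into its decimal value; positional_value is an illustrative helper name, and Python's built-in int(string, base) performs the same conversion.

```python
def positional_value(digits, base):
    """Expand a digit string, most significant digit first, as a sum of products."""
    value = 0
    for d in digits:
        value = value * base + int(d, base)   # shift previous digits up, add the new one
    return value

# The examples from the slide:
print(positional_value("86409", 10))    # 86409
print(positional_value("11", 2))        # 3
print(positional_value("10101101", 2))  # 173
print(positional_value("255", 8))       # 173
```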
Round-off Errors
• One way numbers are represented on a computer is the signed magnitude method. It employs the first bit of a word to indicate the sign, with
magnitude method. It employs the first bit of a word to indicate the sign, with
a 0 for positive and a 1 for negative.
Round-off Errors
Determine the range of integers in base-10 that can be represented on a 16-bit
computer.
• A 16-bit computer word can store decimal integers ranging from −32,767 to
32,767. In addition, because zero is already defined as 0000000000000000, it is
redundant to use the number 1000000000000000 to define a “minus zero.”
Therefore, it is usually employed to represent an additional negative number:
−32,768, and the range is from −32,768 to 32,767.
• Values outside this range cannot be represented. Depending on the system, exceeding it (overflow) may or may not raise an explicit error.
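The quoted range follows directly from the bit budget; the arithmetic can be reproduced with a few lines (an added sketch, not from the slides) for a 16-bit signed-magnitude word.

```python
bits = 16
magnitude_bits = bits - 1                  # one bit is reserved for the sign

largest = 2 ** magnitude_bits - 1          # 0111111111111111 -> 32767
smallest = -(2 ** magnitude_bits - 1)      # 1111111111111111 -> -32767
print(largest, smallest)                   # 32767 -32767

# Reusing the redundant "minus zero" pattern (1000000000000000) for one more
# negative value extends the range to -32768 .. 32767.
print(-(2 ** magnitude_bits), largest)     # -32768 32767
```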
Round-off Errors
Fractional quantities are typically represented in computers using floating-point
form. In this approach, the number is expressed as a fractional part, called a
mantissa or significand, and an integer part, called an exponent or characteristic,
as in
𝑚 " 𝑏!
where
m = the mantissa,
b = the base of the number system being used, and
e = the exponent.
For instance, the number 156.78 could be represented as 0.15678 × 10³ in a floating-point base-10 system.
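The m × bᵉ decomposition is easy to sketch in code. The helper below (normalize_base10 is an illustrative name, not a library routine) normalizes a positive number so that 0.1 ≤ m < 1, reproducing 156.78 = 0.15678 × 10³; Python's math.frexp does the analogous base-2 decomposition used by actual hardware.

```python
import math

def normalize_base10(x):
    """Return (m, e) with x = m * 10**e and 0.1 <= m < 1 (assumes x > 0)."""
    e = math.floor(math.log10(x)) + 1
    m = x / 10 ** e
    return m, e

print(normalize_base10(156.78))   # (0.15678, 3)  ->  0.15678 * 10**3

# Base-2 equivalent: x = m * 2**e with 0.5 <= m < 1
print(math.frexp(156.78))         # (0.612421875, 8)  ->  0.612421875 * 2**8
```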
Floating-point Form
• Modern computers use floating-point arithmetic based on base 2,
not base 10. The IEEE 754 standard, which is widely used for floating-
point computations, defines floating-point representations in binary
(base 2).
• Converting a decimal fraction into binary floating-point form:
1. Convert the fraction to binary
2. Normalize the binary number
3. Determine the floating-point representation
Floating-point Form (Example #1)
Convert ½ into binary floating-point form.
So, combining these, the IEEE 754 single-precision binary floating-point representation of ½ is:
0 01111110 00000000000000000000000
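That bit pattern can be confirmed directly. The snippet below (an added illustration) packs 0.5 as an IEEE 754 single-precision value with the standard struct module and splits the 32 bits into sign, exponent, and mantissa fields.

```python
import struct

def float32_bits(x):
    """Return the 32 IEEE 754 single-precision bits of x as a bit string."""
    (packed,) = struct.unpack(">I", struct.pack(">f", x))   # reinterpret the 4 bytes as an unsigned int
    return f"{packed:032b}"

bits = float32_bits(0.5)
sign, exponent, mantissa = bits[0], bits[1:9], bits[9:]
print(sign, exponent, mantissa)
# 0 01111110 00000000000000000000000
# exponent 01111110 = 126, biased by 127, so 0.5 = 1.0 * 2**(126 - 127)
```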
Floating-point Form (Example #1)
Calculate the round-off error after converting ½ into binary floating-point form. Since ½ = 0.1₂ has a finite, exact binary representation, the stored value equals the true value and the round-off error is zero.
Floating-point Form (Example #2)
Convert 5.625 into binary floating-point form.
Convert the whole-number part (5) by repeated division by 2:
5 ÷ 2 = 2 remainder 1
2 ÷ 2 = 1 remainder 0
1 ÷ 2 = 0 remainder 1
Binary for 5: 101
Convert the fractional part (0.625) by repeated multiplication by 2:
0.625 × 2 = 1.25, integer part = 1
0.25 × 2 = 0.5, integer part = 0
0.5 × 2 = 1.0, integer part = 1
Binary for 0.625: 0.101
Combining the two parts: 5.625 = 101.101₂
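The divide-by-2 and multiply-by-2 procedures above translate directly into code. This sketch (an addition, with illustrative helper names) converts the whole part by repeated division and the fractional part by repeated multiplication, reproducing 5 → 101 and 0.625 → .101.

```python
def whole_to_binary(n):
    """Repeatedly divide by 2 and collect remainders (most significant bit last)."""
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits   # prepend so the bits end up in the right order
        n //= 2
    return bits or "0"

def fraction_to_binary(f, max_bits=23):
    """Repeatedly multiply by 2 and collect the integer parts (0 or 1)."""
    bits = ""
    while f > 0 and len(bits) < max_bits:
        f *= 2
        bits += str(int(f))
        f -= int(f)
    return bits

print(whole_to_binary(5))           # 101
print(fraction_to_binary(0.625))    # 101, i.e. 0.101 in binary
# Together: 5.625 = 101.101 in binary, which normalizes to 1.01101 * 2**2.
```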
Truncation Errors
• Truncation errors arise when an approximation replaces an exact mathematical procedure. For example, the exponential function can be expanded as the series
e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots
• If you truncate this series after the first two terms to approximate eˣ as 1 + x, the difference between the true value and this approximation is the truncation error.
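For a concrete feel for the size of this truncation error, the sketch below (added here; the sample point x = 0.5 is an arbitrary choice) compares partial sums of the series with math.exp.

```python
import math

x = 0.5                               # sample point (an arbitrary choice)
true_value = math.exp(x)

# Partial sums of e^x = 1 + x + x^2/2! + x^3/3! + ...
partial = 0.0
for n_terms in range(1, 7):
    partial += x ** (n_terms - 1) / math.factorial(n_terms - 1)
    trunc_error = abs(true_value - partial)
    print(f"{n_terms} term(s): approx = {partial:.6f}, truncation error = {trunc_error:.2e}")

# Keeping only 1 + x (two terms) leaves a truncation error of about 1.5e-01 at x = 0.5;
# each additional term shrinks it further.
```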