Week 1: Errors and Approximation
Numerical Methods

Electrical Engineering

Numerical Methods
and Analysis
Engr. Ranzel Dimaculangan, ECT
01
Numerical Methods
and Analysis
Numerical Analysis
• Definition: Numerical analysis is the study of algorithms that use numerical
approximation (as opposed to symbolic or analytical computation) for the
problems of mathematical analysis.
• Focus: It is concerned with the accuracy, stability, and efficiency of these
algorithms. Numerical analysis seeks to understand the errors involved in
approximations and how they propagate through the computations.
• Purpose: The goal is to provide a theoretical foundation for why and how
these methods work, including convergence, error bounds, and robustness.
Numerical analysis often deals with the development of new algorithms that
are more accurate or efficient.
• Example Topics: Error analysis, stability of algorithms, convergence of
sequences and series, numerical differentiation and integration, and
eigenvalue problems.
Numerical Methods
• Definition: Numerical methods are the techniques or algorithms used to
obtain numerical solutions to mathematical problems that may not have exact
analytical solutions.
• Focus: It is more applied, focusing on the implementation of specific
techniques to solve practical problems. Numerical methods involve the actual
procedures or algorithms to approximate the solution of mathematical
problems.
• Purpose: The goal is to develop and apply techniques that can be used in
practice to solve problems in science, engineering, and other fields. Numerical
methods emphasize the implementation and computational aspects.
• Example Topics: Methods for solving linear and nonlinear equations (e.g.,
Newton's method), numerical integration (e.g., Simpson's rule), finite
difference methods for differential equations, and Monte Carlo methods.
Analytical vs Numerical Solutions
Analytical Solutions
These are exact, derived using mathematical methods, and are typically
preferred when the problem allows it. They provide deep insights but
are limited in scope.

Numerical Solutions
These are approximate, obtained through computational methods, and
are necessary for complex or real-world problems where analytical
solutions are not feasible. They offer flexibility but at the cost of
potential numerical errors and the need for computational resources.
Analytical vs Numerical Solutions

Methodology
• Analytical: Involves deriving an exact solution using mathematical techniques.
• Numerical: Involves approximating the solution by discretizing the problem and solving it iteratively.

Applicability
• Analytical: Suitable for simpler or well-defined problems where the equation can be manipulated into a solvable form.
• Numerical: Applicable to a wide range of problems, including those that are too complex or impossible to solve analytically.

Accuracy
• Analytical: Provides an exact solution, assuming no algebraic errors are made.
• Numerical: Provides an approximate solution, with the accuracy depending on the numerical method and step size used.

Complexity
• Analytical: Can be complex and challenging to derive, especially for non-trivial equations or systems.
• Numerical: The complexity lies more in the implementation rather than in the derivation.

Typical Use Cases
• Analytical: Commonly used in simpler physics problems, basic differential equations in engineering, and theoretical work where understanding the exact relationship between variables is essential.
• Numerical: Common in real-world applications where the system is too complex for an exact solution, such as fluid dynamics, weather modeling, structural analysis, and large-scale simulations.
Example
Solve the quadratic equation x² − 3x + 2 = 0
using analytical and numerical computation.
Using Analytical Computation
Solve the quadratic equation x² − 3x + 2 = 0 using analytical and
numerical computation.

Method: Using the quadratic formula

x = (−b ± √(b² − 4ac)) / (2a)

Solution: x₁ = 2, x₂ = 1
Using Numerical Computation
Solve the quadratic equation x² − 3x + 2 = 0 using analytical and
numerical computation.

Method: Using the Newton-Raphson formula

xₙ₊₁ = xₙ − f(xₙ) / f′(xₙ)

Solution: Repeating this iteration would eventually converge to one of
the solutions, either x₁ = 2 or x₂ = 1, depending on the initial guess.
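As a sketch of the iteration above, here is a minimal Newton-Raphson loop in Python applied to the quadratic from the example (the function name, tolerance, and iteration cap are illustrative choices, not part of the slides):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until successive
    estimates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = x - f(x) / df(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

f = lambda x: x**2 - 3*x + 2     # the quadratic from the example
df = lambda x: 2*x - 3           # its derivative

print(newton_raphson(f, df, x0=3.0))  # converges to x1 = 2.0
print(newton_raphson(f, df, x0=0.0))  # converges to x2 = 1.0
```

Which of the two roots the iteration reaches depends on the starting point x0.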
Course Outline
• Midterm
1. Error and Approximation
2. Solutions of Non-linear Equations
3. Systems of Linear Equations

• Final Term
1. Interpolating Polynomial
2. Numerical Integration and Differentiation
3. Numerical Solutions of Ordinary Differential Equations
02
Errors and
Approximations
Review: Polynomial Functions
• Definition: A polynomial function is a function that can be expressed in the
form P(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀, where aₙ, aₙ₋₁, …, a₀ are constants,
and n is a non-negative integer.
• Examples:

• Properties:
• Polynomials are continuous and differentiable everywhere.
• The degree of the polynomial is the highest power of x that appears in
the function.
• The roots (solutions where P(x)=0) can be real or complex, and the
number of roots is equal to the degree of the polynomial.
Review: Transcendental Functions
• Definition: Transcendental functions are functions that cannot be expressed as a finite
combination of algebraic operations (addition, subtraction, multiplication, division, and
taking roots) on the variable x. These functions go beyond polynomials.
• Examples:

• Properties:
• Transcendental functions often exhibit more complex behavior, including
periodicity (like trigonometric functions) or rapid growth (like exponential
functions).
• They are also continuous and differentiable, but their derivatives may involve other
transcendental functions.
• They cannot generally be solved in terms of elementary algebraic operations.
Accuracy vs Precision
• Although the numerical technique yielded estimates that were close
to the exact analytical solution, there was a discrepancy, or error,
because the numerical method involved an approximation.
• The errors associated with both calculations and measurements can
be characterized with regard to two concepts:
• Accuracy refers to how closely a computed or measured value
agrees with the true value.
• Precision refers to how closely individual computed or measured
values agree with each other.
Inaccuracy vs Imprecision
• Inaccuracy (also called bias) is defined as systematic
deviation from the truth.
• Imprecision (also called uncertainty) refers to the magnitude
of the scatter.
• Numerical methods should be sufficiently accurate or
unbiased to meet the requirements of a particular
engineering problem.
Accuracy vs Precision
How do we quantify
accuracy and precision?
Quantifying Accuracy
In numerical methods, accuracy is often measured using the following metrics:

1. Absolute Error. This gives a direct measure of the deviation from the true value.

   Absolute Error = |True Value − Computed Value|

2. Relative Error. This normalizes the error by the true value, which is useful when
the true value has a large magnitude.

   Relative Error = |True Value − Computed Value| / |True Value|

3. Percentage Error. This provides the relative error as a percentage, making it
easier to interpret.

   Percentage Error = Relative Error × 100%
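The three accuracy metrics translate directly into small Python helpers (the function names are illustrative), here using π ≈ 3.14159 as the true value:

```python
def absolute_error(true_value, computed):
    """Direct deviation from the true value."""
    return abs(true_value - computed)

def relative_error(true_value, computed):
    """Absolute error normalized by the true value."""
    return absolute_error(true_value, computed) / abs(true_value)

def percentage_error(true_value, computed):
    """Relative error expressed as a percentage."""
    return relative_error(true_value, computed) * 100

true_pi = 3.14159
print(absolute_error(true_pi, 3.14))    # ≈ 0.00159
print(relative_error(true_pi, 3.14))    # ≈ 5.0611e-4
print(percentage_error(true_pi, 3.14))  # ≈ 0.0506 %
```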
Quantifying Precision
1. Standard Deviation. It is a measure of the amount of variation or dispersion in a
set of values. A smaller standard deviation indicates higher precision.

   Standard Deviation = √( (1/n) Σᵢ₌₁ⁿ (xᵢ − μ)² )

2. Variance. It is the expectation of the squared deviation of each number from the
mean of a dataset.

   Variance = (1/(n − 1)) Σᵢ₌₁ⁿ (xᵢ − x̄)²

3. Range. It is a simple measure of variability or spread in a dataset.

   Range = Maximum Value − Minimum Value
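Python's standard statistics module provides these metrics directly. Note that, as written above, the standard-deviation formula divides by n (population form) while the variance formula divides by n − 1 (sample form); the sketch below computes both forms, using the iterates from the π example that follows:

```python
import statistics

values = [3.14, 3.142, 3.1416]             # iterates from the example

pop_stdev = statistics.pstdev(values)      # divides by n (population)
pop_var = statistics.pvariance(values)     # divides by n (population)
sample_var = statistics.variance(values)   # divides by n - 1 (sample)
value_range = max(values) - min(values)

print(pop_stdev, pop_var, sample_var, value_range)
```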


Example
Suppose you are using a numerical method to approximate the value of π, and after multiple
iterations, you obtain the following results:

Computed values: 1st iteration = 3.14, 2nd iteration = 3.142, 3rd iteration = 3.1416
Assuming the true value is π ≈ 3.14159

Accuracy:
• Compute the absolute error for each value.
• Compute the relative error by dividing the absolute error by π.

Precision:
• Compute the standard deviation or variance of the computed values to assess the
consistency.
• Compute the range of the computed values to get a sense of their spread.
Example
Suppose you are using a numerical method to approximate the value of π, and after multiple
iterations, you obtain the following results:

Computed values: 1st iteration = 3.14, 2nd iteration = 3.142, 3rd iteration = 3.1416
Assuming the true value is π ≈ 3.14159

Accuracy:
• Absolute error = 0.00159, 0.00041, 0.00001
• Relative error = 5.061131 × 10⁻⁴, 1.305072 × 10⁻⁴, 3.183102 × 10⁻⁶
• Percentage error = 5.061131 × 10⁻² %, 1.305072 × 10⁻² %, 3.183102 × 10⁻⁴ %

Precision (both computed with the population form, dividing by n):
• Standard deviation = 0.000864
• Variance = 7.466667 × 10⁻⁷
• Range = 3.1420 − 3.14 = 0.002
Error Definitions
• Numerical errors arise from the use of approximations to
represent exact mathematical operations and quantities.
• Two sources of error in numerical calculations:
1. Round-off errors which result when numbers having
limited significant figures are used to represent exact
numbers.
2. Truncation errors which result when approximations
are used to represent exact mathematical procedures.
Round-off Errors
• Round-off errors originate from the fact that computers retain only a
fixed number of significant figures during a calculation.
• Numbers such as π, e, or square root of 7 cannot be expressed by a
fixed number of significant figures.
• Therefore, they cannot be represented exactly by the computer. In
addition, because computers use a base-2 representation, they
cannot precisely represent certain exact base-10 numbers. The
discrepancy introduced by this omission of significant figures is
called round-off error.
Round-off Errors
• Numerical round-off errors are directly related to the manner in which
numbers are stored in a computer.
• A number system is merely a convention for representing quantities.
Examples are decimal (base-10), octal (base-8), and binary (base-2).
• Examples:
• 173₁₀ (base-10) = 255₈ (base-8) = 10101101₂ (base-2)
• −289₁₀ (base-10) = −441₈ (base-8) = −100100001₂ (base-2)
• Positional notation is a system for representing numbers where the value of a
digit depends on its position within the number. The sum of products always
gives the decimal equivalent of the number from any numeral system.
Examples:
• (8 × 10⁴) + (6 × 10³) + (4 × 10²) + (0 × 10¹) + (9 × 10⁰) = 86,409₁₀
• 11₂ = (1 × 2¹) + (1 × 2⁰) = 3₁₀
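These conversions are easy to check in Python, where int() parses a numeral string in any base and the sum-of-products form can be spelled out explicitly:

```python
# int(text, base) parses a numeral in the given base
print(int("10101101", 2))   # 173
print(int("255", 8))        # 173
print(int("11", 2))         # 3

# Positional notation as an explicit sum of digit * base**position
digits = "86409"
value = sum(int(d) * 10**i for i, d in enumerate(reversed(digits)))
print(value)                # 86409
```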
Round-off Errors
• One way numbers are represented on a computer is the signed magnitude
method. It employs the first bit of a word to indicate the sign, with a 0 for
positive and a 1 for negative.
Round-off Errors
Determine the range of integers in base-10 that can be represented on a 16-bit
computer.

• A 16-bit computer word can store decimal integers ranging from −32,767 to
32,767. In addition, because zero is already defined as 0000000000000000, it is
redundant to use the number 1000000000000000 to define a “minus zero.”
Therefore, it is usually employed to represent an additional negative number:
−32,768, and the range is from −32,768 to 32,767.
• Values outside this range cannot be represented by the computer; depending
on the system, such an overflow may or may not raise an error.
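A quick way to see this range in practice is Python's struct module, which packs a signed 16-bit integer with the "h" format (struct uses two's complement rather than the signed-magnitude scheme above, but the representable range is the same):

```python
import struct

lo = struct.pack(">h", -32768)   # smallest representable 16-bit integer
hi = struct.pack(">h", 32767)    # largest representable 16-bit integer

try:
    struct.pack(">h", 32768)     # one past the maximum
    overflowed = False
except struct.error:
    overflowed = True

print(overflowed)                # True: the value cannot be represented
```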
Round-off Errors
Fractional quantities are typically represented in computers using floating-point
form. In this approach, the number is expressed as a fractional part, called a
mantissa or significand, and an integer part, called an exponent or characteristic,
as in
m × bᵉ
where
m = the mantissa,
b = the base of the number system being used, and
e = the exponent.

For instance, the number 156.78 could be represented as 0.15678 × 10³ in a floating-
point base-10 system.
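Python exposes this decomposition for its (base-2) floats through math.frexp, which returns a mantissa m with 0.5 ≤ |m| < 1 and an integer exponent e such that x = m · 2ᵉ, the base-2 analogue of 0.15678 × 10³:

```python
import math

m, e = math.frexp(156.78)   # decompose into mantissa and exponent
print(m, e)                 # m ≈ 0.6124, e = 8

# The decomposition is exact: scaling by a power of 2 loses nothing
assert m * 2**e == 156.78
```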
Floating-point Form
• Modern computers use floating-point arithmetic based on base 2,
not base 10. The IEEE 754 standard, which is widely used for floating-
point computations, defines floating-point representations in binary
(base 2).
• Converting a decimal fraction into binary floating-point form:
1. Convert the fraction to binary
2. Normalize the binary number
3. Determine the floating-point representation
Floating-point Form (Example #1)
Convert ½ into binary floating-point form.

1. Convert the Fraction to Binary: ½ in binary is 0.1₂.

2. Normalize the Binary Number: In normalized form, the
binary number 0.1 becomes 1.0 × 2⁻¹. In binary floating-
point representation, normalization means adjusting
the number so that there is only one non-zero digit to
the left of the binary point.
Floating-point Form (Example #1)
3. Determine the Floating-Point Representation:

For IEEE 754 single-precision (32-bit) floating-point format:


• Sign bit (1 bit, where 0 = positive and 1 = negative): 0
• Exponent (8 bits): The exponent is stored with a bias of 127 for single precision. Add 127 to
the actual exponent to get the stored exponent value, then convert to binary. For 2⁻¹, the
exponent is −1. Adding the bias, −1 + 127 = 126. In binary, 126 is 01111110.
• Mantissa or significand (23 bits): The mantissa is stored without the leading 1 (which is
implied in normalized numbers). The normalized mantissa for 1.0 is thus
00000000000000000000000 (23 bits of zeros).

So, combining these, the IEEE 754 single-precision binary floating-point representation of ½ is:
0 01111110 00000000000000000000000
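This bit pattern can be verified in Python with the struct module (the helper name is illustrative):

```python
import struct

def float32_bits(x):
    """Return the 32 IEEE 754 single-precision bits of x as a string."""
    (as_int,) = struct.unpack(">I", struct.pack(">f", x))
    return format(as_int, "032b")

bits = float32_bits(0.5)
print(bits[0], bits[1:9], bits[9:])
# 0 01111110 00000000000000000000000
```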
Floating-point Form (Example #1)
Calculate the round-off error after converting ½ into binary floating-point form.

1. Convert 0 01111110 00000000000000000000000 to decimal.
2. Identify the sign bit, which is 0 = positive number.
3. Identify the exponent, which is 01111110, then convert to decimal: 01111110₂ = 126₁₀.
Subtract the bias of 127 for single precision: 126 − 127 = −1; thus, the exponent is −1.
4. Identify the mantissa, which is 00000000000000000000000. A leading 1 is implied;
thus the mantissa is 1.00000000000000000000000.
5. Thus, 1.0 × 2⁻¹ = 0.1₂. Converting to decimal gives 0.5. Since this equals the
original value exactly, the round-off error for ½ is zero.
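The decoding steps above can be sketched as a small Python function (illustrative, and limited to normalized numbers; it does not handle subnormals, infinities, or NaN):

```python
def bits_to_float32(bits):
    """Decode a 32-character IEEE 754 single-precision bit string
    (normalized numbers only)."""
    sign = -1 if bits[0] == "1" else 1
    exponent = int(bits[1:9], 2) - 127        # remove the bias of 127
    mantissa = 1 + int(bits[9:], 2) / 2**23   # restore the implied leading 1
    return sign * mantissa * 2**exponent

x = bits_to_float32("0" + "01111110" + "0" * 23)
print(x)             # 0.5
print(abs(0.5 - x))  # 0.0 -- no round-off error for 1/2
```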
Review: Converting Decimal Fractions to Binary
1. Convert the whole number part.
• Divide the whole number part by 2 and record the remainder
• Repeat and continue until the answer becomes zero
2. Convert the fractional part.
• Multiply the fractional part by 2 and record the integer part (0 or 1) of the result.
• Repeat and continue until the fractional part becomes zero or repeats.

Example: Convert 5.625 to binary

Convert the whole number part (5):
5 ÷ 2 = 2 remainder 1
2 ÷ 2 = 1 remainder 0
1 ÷ 2 = 0 remainder 1
Binary for 5: 101 (remainders read bottom-up)

Convert the fractional part (0.625):
0.625 × 2 = 1.25, integer part = 1
0.25 × 2 = 0.5, integer part = 0
0.5 × 2 = 1.0, integer part = 1
Binary for 0.625: 0.101

Thus, 5.625₁₀ = 101.101₂
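The two procedures can be sketched in Python (the function name and the cap on fractional bits are illustrative; the cap matters for fractions like 1/10 whose binary expansion never terminates):

```python
def to_binary(x, max_frac_bits=12):
    """Convert a non-negative decimal number to a binary string."""
    whole, frac = int(x), x - int(x)
    whole_bits = bin(whole)[2:]          # repeated division by 2
    frac_bits = ""
    while frac > 0 and len(frac_bits) < max_frac_bits:
        frac *= 2                        # repeated multiplication by 2
        frac_bits += str(int(frac))      # record the integer part
        frac -= int(frac)
    return whole_bits + ("." + frac_bits if frac_bits else "")

print(to_binary(5.625))  # 101.101
```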


Floating-point Form (Assignment)
• Convert 1/10 into binary floating-point form.
• Calculate the absolute error after converting 1/10 into
binary floating-point form.
Truncation Error
• It refers to the error that arises when an infinite process is
approximated by a finite one.
• It occurs because the exact mathematical procedure or
function is "truncated" or cut off at a certain point, leaving
out terms or steps that would be required for a perfectly
accurate solution
Truncation Error
Example:
• Taylor Series Approximation: Suppose you want to approximate the function
eˣ using its Taylor series expansion around x = 0:

  eˣ = 1 + x + x²/2! + x³/3! + ⋯

If you truncate this series after the first two terms to approximate eˣ as 1 + x,
the difference between the true value and this approximation is the
truncation error.

• Numerical Integration: When you approximate the integral of a function using
a finite number of intervals (like with the trapezoidal rule), the difference
between the exact integral and the approximated value is due to truncation
error.
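As a sketch of the second bullet, here is the composite trapezoidal rule applied to the integral of x² on [0, 1] (exact value 1/3); the gap between the exact and approximated integral is the truncation error, and it shrinks as the intervals get finer:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

exact = 1 / 3
for n in (2, 4, 8):
    approx = trapezoid(lambda x: x**2, 0.0, 1.0, n)
    print(n, approx, abs(exact - approx))  # error roughly quarters as n doubles
```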
Taylor Series
The Taylor series is a powerful mathematical tool used to approximate functions
that are smooth near a specific point. It expresses a function as an infinite sum of
terms calculated from the function's derivatives at a single point.
Why use Taylor series?
1. Approximation: The Taylor series allows us to approximate complex
functions with simpler polynomial expressions, which are easier to
work with in calculations.
2. Error Estimation: The difference between the actual function and
the Taylor series approximation can be understood and controlled,
making it a useful tool in numerical analysis.
3. Analytical Insight: By examining the Taylor series of a function, you
can gain insights into its behavior near a specific point, such as its
rate of change (slope), curvature, and higher-order characteristics.
Maclaurin Series
One of the most famous examples is the Taylor series for
the exponential function eˣ around x = 0 (also known as
the Maclaurin series, a special case of the Taylor series):

  eˣ = Σₙ₌₀^∞ xⁿ/n! = 1 + x + x²/2! + x³/3! + ⋯
Maclaurin Series
Solve eˣ when x = 2.
• e² = 7.389056099 (true value)
• Using the Maclaurin series, the
approximate answers are shown in
the table.
• The truncation error is the difference
between the true value and the
approximated result.
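Since the table itself is not reproduced here, the partial sums and their truncation errors can be regenerated with a short loop:

```python
import math

true_value = math.exp(2)   # 7.389056099 (true value)

partial = 0.0
for n in range(8):
    partial += 2**n / math.factorial(n)   # add the term x**n / n! with x = 2
    print(n, partial, true_value - partial)  # truncation error shrinks each term
```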
Truncation Error
How to reduce truncation errors?
Truncation error can often be reduced by including more
terms in the approximation or taking more steps in an
iterative process. For example, using more terms in the
Taylor series or finer intervals in numerical integration
decreases truncation error.
Any questions?
Thanks!
Do you have any questions?
[email protected]

CREDITS: This presentation template was created by Slidesgo, and
includes icons by Flaticon, and infographics & images by Freepik.

Please keep this slide for attribution.

