Numerical Method Lec1 With Activity

The document discusses numerical methods and analysis, detailing their applications in solving equations, interpolation, integration, and differential equations. It explains the concepts of approximation, round-off errors, significant figures, accuracy, precision, and error definitions. Additionally, it covers the bisection method for finding roots of functions, providing examples and algorithms for implementation.

NUMERICAL METHOD AND ANALYSIS

A numerical method is a mathematical tool designed to solve numerical problems. The
implementation of a numerical method, with an appropriate convergence check, in a
programming language is called a numerical algorithm.
Numerical analysis is used in:
1. Solving linear and non-linear equations and finding their real roots; many methods
exist, such as bisection, Newton-Raphson, etc.
2. Fitting a set of points to a curve, giving a good approximation with a simple solution.
3. Interpolation, which is useful for estimating any value in between a table of values.
Newton's general method applies to unequally spaced readings as well as equally
spaced ones.
4. Evaluating definite integrals. Simple methods compute the integral based on the idea
that a definite integral is the area bounded by the given curve, and they approximate
that area very closely. Many methods exist, such as Simpson's rule.
5. Solving initial-value first- and second-order differential equations, with good
approximations that are simpler than an exact analysis.
6. Solving partial differential equations, such as the Laplace equation and the wave
equation, very quickly.

APPROXIMATION AND ROUND OFF ERRORS

A numerical technique yields a result that is close to the exact analytical solution, but a
discrepancy, or error, remains because the numerical method involves an approximation. When
an analytical solution is available, the error can be computed exactly.

SIGNIFICANT FIGURES
The concept of a significant figure or digit has been developed to formally designate the
reliability of a numerical value. The significant digits of a number are those that can be used with
confidence.

Two important implications for our study of numerical methods:


1. As introduced in the falling parachutist problem, numerical methods yield approximate results.
We must, therefore, develop criteria to specify how confident we are in our approximate result.
One way to do this is in terms of significant figures. For example, we might decide that our
approximation is acceptable if it is correct to four significant figures.
2. Although quantities such as π, e, or √7 represent specific quantities, they cannot be expressed
exactly by a limited number of digits.
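This limitation can be seen directly in Python (a minimal sketch; the choice of six decimal places is arbitrary for the illustration):

```python
import math

# pi, e, and sqrt(7) are irrational: any finite number of digits
# is only an approximation of the true quantity.
for name, value in [("pi", math.pi), ("e", math.e), ("sqrt(7)", math.sqrt(7))]:
    print(f"{name:>7}: {value:.6f}  (only the digits shown are retained)")
```

However many digits are printed, the displayed value is an approximation; even the stored double-precision value itself carries only about 15–16 significant decimal digits.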

ACCURACY AND PRECISION

Accuracy - refers to how closely a computed or measured value agrees with the true value
Precision - refers to how closely individual computed or measured values agree with each other
ERROR DEFINITION
Numerical Errors arise from the use of approximations to represent exact mathematical
operations and quantities

1. Truncation Errors- result when approximations are used to represent exact mathematical
procedures
2. Round off Errors- result when numbers having limited significant figures are used to
represent exact numbers

For both types, the relationship between the exact, or true, result and the approximation can be
formulated as
True value = approximation + error
By rearranging
Et = true value – approximation

Where, Et is used to designate the exact value of the error. The subscript t is included to
designate that this is the “true” error. This is in contrast to other cases, as described shortly,
where an “approximate” estimate of the error must be employed
True fractional relative error = true error / true value
where error = true value – approximation

True Percent Relative Error:
ϵT = (true error / true value) x 100%

Problem
Suppose that you have the task of measuring the lengths of a bridge and a rivet and come up with
9999 and 9 cm, respectively. If the true values are 10,000 and 10 cm, respectively, compute (a)
the true error and (b) the true percent relative error for each case.

Solution
(a) The error for measuring the bridge
Et = 10,000 - 9999 = 1 cm and for the rivet it is
Et = 10 - 9 = 1 cm

(b) The percent relative error for the bridge
ϵT = (1 / 10,000) x 100% = 0.01 %
and for the rivet it is
ϵT = (1 / 10) x 100% = 10 %
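The bridge and rivet computation can be checked in a couple of lines (a minimal sketch; the function names are our own):

```python
def true_error(true_value, approximation):
    # Et = true value - approximation
    return true_value - approximation

def true_percent_relative_error(true_value, approximation):
    # eps_t = (true error / true value) x 100%
    return true_error(true_value, approximation) / true_value * 100

bridge = true_percent_relative_error(10_000, 9_999)   # ≈ 0.01 %
rivet = true_percent_relative_error(10, 9)            # ≈ 10 %
```

Both measurements have the same true error of 1 cm, but the relative error shows that the rivet measurement is far less accurate than the bridge measurement.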
In iterative computations, where the true value is not known in advance, the error is normalized
to the current approximation:

ϵa = ((current approximation − previous approximation) / current approximation) x 100%

Subscript a - signifies that the error is normalized to an approximate value
Subscript t - signifies that the error is normalized to the true value

Note:

1. If the approximation is greater than the true value (or the previous approximation is
greater than the current approximation), the error is negative
2. If the approximation is less than the true value, the error is positive
3. The computation is repeated until
|ϵa| < ϵs
where ϵs is a pre-specified percent tolerance.
4. It is also convenient to relate errors to the number of significant figures in the
approximation. If the following criterion is met, it can be assured that the result is
correct to at least n significant figures:
ϵs = (0.5 x 10^(2-n)) %

Problem Statement
In mathematics, functions can often be represented by infinite series. For example, the
exponential function can be computed using
e^x = 1 + x + x^2/2! + x^3/3! + .... + x^n/n!
Thus, as more terms are added in sequence, the approximation becomes a better and better
estimate of the true value of e^x. This equation is called a Maclaurin series expansion. Starting
with the simplest version, e^x = 1, add terms one at a time to estimate e^0.5. After each new term
is added, compute the true and approximate percent relative errors. The true value is
e^0.5 = 1.648721 . . . . Add terms until the absolute value of the approximate error estimate ϵa
falls below an error criterion ϵs conforming to three significant figures.
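The term-by-term procedure can be sketched in Python (the function name and signature are our own; the stopping criterion is ϵs = (0.5 x 10^(2-n)) % from the note above):

```python
import math

def exp_series(x, n_sig=3, verbose=False):
    # Maclaurin series e**x = 1 + x + x**2/2! + ..., adding terms until
    # the approximate percent relative error eps_a falls below eps_s.
    eps_s = 0.5 * 10 ** (2 - n_sig)        # stopping criterion, in percent
    approx, term, n = 1.0, 1.0, 0
    while True:
        n += 1
        term *= x / n                      # next term: x**n / n!
        previous = approx
        approx += term
        eps_a = abs((approx - previous) / approx) * 100
        if verbose:
            print(f"{n + 1} terms: {approx:.6f}  eps_a = {eps_a:.4f}%")
        if eps_a < eps_s:
            return approx

estimate = exp_series(0.5, n_sig=3)        # true value: 1.648721...
```

For x = 0.5 and three significant figures (ϵs = 0.05 %), the loop stops after six terms at an estimate of about 1.648698, which indeed agrees with the true value 1.648721... to at least three significant figures.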
BISECTION METHOD
Alternatively called binary chopping, interval halving, or Bolzano's method, it is one type of
incremental search method in which the interval is always divided in half. It consists of
finding two numbers a and b at which f(x) has opposite signs, then halving the interval [a, b],
keeping the half on which f(x) changes sign, and repeating the procedure until the interval
shrinks to give the required accuracy for the root.
An algorithm could be defined as follows:
Suppose we need a root for f(x) = 0 and we have an error tolerance of ϵ (absolute error in
calculating the root must be less than ϵ)
Step 1: Find two numbers a and b at which f has different signs.
Step 2: Define c = (a + b)/2
Step 3: If b − c ≤ ϵ, then accept c as the root and stop
Step 4: If f(a) f(c) ≤ 0, then set c as the new b. Otherwise, set c as the new a. Return to Step 2

Example 1

Consider finding the root of f(x) = x2 - 3. Let εstep = 0.01, εabs = 0.01 and start with the interval [1,
2].

Table 1. Bisection method applied to f(x) = x2 - 3.

a        b        f(a)      f(b)     c = (a + b)/2   f(c)      Update   new b − c
1.0      2.0      -2.0      1.0      1.5             -0.75     a = c    0.5
1.5      2.0      -0.75     1.0      1.75            0.0625    b = c    0.25
1.5      1.75     -0.75     0.0625   1.625           -0.3594   a = c    0.125
1.625    1.75     -0.3594   0.0625   1.6875          -0.1523   a = c    0.0625
1.6875   1.75     -0.1523   0.0625   1.7188          -0.0457   a = c    0.0313
1.7188   1.75     -0.0457   0.0625   1.7344          0.0081    b = c    0.0156
1.7188   1.7344   -0.0457   0.0081   1.7266          -0.0189   a = c    0.0078
Thus, after the seventh iteration, we note that the final interval, [1.7266, 1.7344], has a width
less than 0.01 and |f(1.7344)| < 0.01, and therefore we choose b = 1.7344 to be our
approximation of the root.
Example 2

Consider finding the root of f(x) = e-x(3.2 sin(x) - 0.5 cos(x)) on the interval [3, 4], this time with
εstep = 0.001, εabs = 0.001.

Table 2. Bisection method applied to f(x) = e^-x(3.2 sin(x) - 0.5 cos(x)).

a        b        f(a)      f(b)     c = (a + b)/2   f(c)      Update   new b − c
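The rows of Table 2 can be generated numerically with the same halving procedure described in the algorithm above (restated here so the sketch is self-contained; names are our own):

```python
import math

def bisection(f, a, b, eps):
    # Halve [a, b] each pass and keep the half where f changes sign.
    while True:
        c = (a + b) / 2
        if b - c <= eps:
            return c
        if f(a) * f(c) <= 0:
            b = c
        else:
            a = c

f = lambda x: math.exp(-x) * (3.2 * math.sin(x) - 0.5 * math.cos(x))
root = bisection(f, 3.0, 4.0, 0.001)
```

Since e^-x never vanishes, the exact root satisfies 3.2 sin(x) = 0.5 cos(x), i.e. x = π + arctan(0.15625) ≈ 3.2966, which the iterates approach as the bracket shrinks below the 0.001 tolerance.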
