
WEEK 1 NUMERICAL ANALYSIS

Numerical Analysis deals with the process of obtaining numerical solutions to
complex problems. Most of the mathematical problems that arise in science and
engineering are very hard, and sometimes impossible, to solve exactly. Thus,
approximating a difficult mathematical problem is very important to make it easier
to solve. Due to the immense development of computational technology, numerical
approximation has become a popular and modern tool for scientists and engineers.
As a result, many scientific software packages have been developed (for instance,
Matlab, Mathematica, Maple) to handle difficult problems in an efficient and easy
way.
“Numerical Analysis is the branch of mathematics that provides tools and methods for solving
mathematical problems in numerical form.”

Introduction to Numerical Analysis


Numerical analysis is a discipline of mathematics concerned with the development
of efficient methods for obtaining numerical solutions to complex mathematical
problems. The subject has three parts. The first part deals with the creation of a
method for solving a problem. The second part covers the analysis of the method,
which includes error analysis and efficiency analysis: error analysis tells us how
accurate the result will be if we use the method, while efficiency analysis tells us
how fast we can compute the result. The third part is the construction of an efficient
algorithm to implement the method as computer code. One must be familiar with all
three parts to have a thorough understanding of numerical analysis.
There are at least three reasons to learn the theoretical foundations of
numerical methods:

1. Learning various numerical methods and analyzing them familiarizes a
person with the process of inventing new numerical methods. This is critical
when the existing approaches are insufficient or inefficient to handle a certain
problem.
2. In many cases, there are multiple solutions to a problem. As a result, using the
right procedure is critical for getting a precise answer in less time.
3. With a solid foundation, one can effectively apply methods (especially when
a technique has its own restrictions and/or drawbacks in certain instances)
and, more significantly, analyze what went wrong when results did not meet
expectations.


“In numerical analysis we are mainly interested in implementation and analysis of
numerical algorithms for finding an approximate solution to a mathematical
problem.”
NUMERICAL ALGORITHM
A complete set of procedures which gives an approximate solution to a mathematical
problem.

CRITERIA FOR A GOOD METHOD

1) Number of computations, i.e. additions, subtractions, multiplications and divisions.
2) Applicability to a class of problems.
3) Speed of convergence.
4) Error management.
5) Stability.

NUMERICAL ITERATION METHOD


A mathematical procedure that generates a sequence of improving approximate
solutions for a class of problems, i.e. the process of finding successive
approximations.

ALGORITHM OF ITERATION METHOD


A specific way of implementing an iteration method, including its termination
criteria, is called the algorithm of the iteration method.
In the problem of finding the solution of an equation, an iteration method uses an
initial guess to generate successive approximations to the solution.

CONVERGENCE CRITERIA FOR A NUMERICAL COMPUTATION


If the method leads to values that come close to the exact solution, then we say that
the method is convergent; otherwise the method is divergent, i.e.

lim xₙ = r as n → ∞,

where r is the exact root.

Why do we use numerical iterative methods for solving equations?

As analytic solutions are often either too tedious to obtain or simply do not exist, we
need to find an approximate method of solution. This is where numerical analysis
comes into the picture.
LOCAL CONVERGENCE

An iterative method is called locally convergent to a root if the method converges
to that root for initial guesses sufficiently close to the root.
STEP SIZE, STEP COUNT, INTERVAL GAP
The common difference between the points, i.e. h = (b − a)/n = tᵢ₊₁ − tᵢ, is called the
Step Size.
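For concreteness, here is a minimal Python sketch (the function name uniform_grid and the sample values are illustrative, not from the notes) that computes h = (b − a)/n and the equally spaced points it generates:

```python
# Minimal sketch: step size h = (b - a)/n and the equally spaced points t_i.
def uniform_grid(a: float, b: float, n: int):
    h = (b - a) / n                        # step size
    points = [a + i * h for i in range(n + 1)]
    return h, points

h, t = uniform_grid(0.0, 1.0, 4)
print(h)            # 0.25
print(t)            # [0.0, 0.25, 0.5, 0.75, 1.0]
print(t[1] - t[0])  # equals h, i.e. t_{i+1} - t_i
```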

Numerical Computing Process


 Construction of a Mathematical model.
 Construction of an appropriate numerical system.
 Implementation of a solution.
 Verification of the solution.

Numerical Computing Characteristics


 Accuracy: Every numerical method introduces errors. They may be due to the
use of an approximate mathematical process or due to the inexact representation
and manipulation of numbers on the computer.
 Efficiency: Another consideration in choosing a numerical method for the
solution of a mathematical model is efficiency, which means the amount of effort
required by both people and computers to use the method.
 Numerical instability: Another problem presented by a numerical method
is numerical instability. Errors introduced into the calculation, from any source,
grow in different ways; in some cases this growth is rapid, resulting in
catastrophic results.
Note:
If a numerical method is not affected by round-off error and its results converge to
a solution, we say the method is stable.

ERROR
Error is a term used to denote the amount by which an approximation fails to
equal the exact solution.
SOURCE OF ERRORS
Numerically computed solutions are subject to certain errors. Mainly there are
three types of errors.
1. Inherent errors
2. Truncation errors
3. Round Off errors

INHERENT (EXPERIMENTAL) ERRORS


These errors arise due to assumptions made in the mathematical modeling of a
problem. They also arise when the data are obtained from physical measurements of
the parameters of the problem, i.e. they are errors arising from measurements.
TRUNCATION ERRORS
These errors arise when approximations are used to estimate some quantity.
They correspond to the fact that an infinite sequence of computational steps
necessary to produce an exact result is “truncated” prematurely after a finite number
of steps.
How can truncation error be reduced?
It can only be removed entirely by using the exact solution. It can be reduced by
applying the same approximation to a larger number of smaller intervals or by
switching to a better approximation.
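As an illustration, here is a minimal Python sketch (the helper exp_truncated and the chosen values are illustrative) of the truncation error made by cutting off the Taylor series of eˣ after a finite number of terms, and of how that error shrinks as more terms are kept:

```python
# Minimal sketch: truncation error from cutting off the Taylor series of e^x.
import math

def exp_truncated(x: float, n_terms: int) -> float:
    # sum_{k=0}^{n_terms-1} x^k / k!
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 1.0
exact = math.exp(x)
for n in (2, 4, 8):
    approx = exp_truncated(x, n)
    print(n, approx, abs(exact - approx))  # truncation error decreases with n
```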

ROUND OFF ERRORS

Errors arising from the process of rounding off numbers during computations.

Rounding by simply discarding all digits beyond a certain decimal place is also
called “chopping”.

Absolute Error
Absolute Error is the magnitude of the difference between the true value “a”
and the approximate value “ā”. The error between the two values is defined as:
Absolute Error = |Approximate value − True Value| = |ā − a|

RELATIVE ERRORS
If “ā” is an approximate value of a quantity whose exact value is
“a”, then the relative error of “ā” is defined by
Relative Error = |Approximate value − True Value| / True Value = |ā − a| / a

Percentage Error
If “ā” is an approximate value of a quantity whose exact value is
“a”, then the percentage error of “ā” is defined as:
Percentage Error = (|Approximate value − True Value| / True Value) × 100
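A minimal Python sketch of these three error measures, using 22/7 as an approximation to π (the example values are illustrative):

```python
# Minimal sketch: absolute, relative and percentage error for 22/7 ~ pi.
import math

true_value = math.pi
approx_value = 22 / 7

absolute_error = abs(approx_value - true_value)
relative_error = absolute_error / abs(true_value)
percentage_error = relative_error * 100

print(absolute_error)    # ~0.00126
print(relative_error)    # ~0.000402
print(percentage_error)  # ~0.0402 (percent)
```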

ROOTS (SOLUTION) OF AN EQUATION OR ZEROES OF A FUNCTION

Those values of “x” for which f(x) = 0 is satisfied are called the roots of the
equation. Thus “a” is a root of f(x) = 0 if and only if f(a) = 0.

ALGEBRAIC EQUATION
The equation f(x) = 0 is called an algebraic equation if it is
purely a polynomial in “x”, e.g.
x³ + 5x² − 6x + 3 = 0
TRANSCENDENTAL EQUATION
The equation f(x) = 0 is called a transcendental equation if it contains
trigonometric, inverse trigonometric, exponential, hyperbolic or
logarithmic functions, e.g.
i. M = eˣ − eˣ sin x
ii. ax² + log(x − 3) + eˣ sin x = 0

PROPERTIES OF ALGEBRAIC EQUATIONS


1. Every algebraic equation of degree “n” has “n” and only “n” roots, e.g.

x² − 1 = 0 has distinct roots, i.e. 1, −1
x² + 2x + 1 = 0 has repeated roots, i.e. −1, −1
x² + 1 = 0 has complex roots, i.e. +i, −i

2. For equations with real coefficients, complex roots occur in conjugate pairs,
i.e. if (a + bi) is a root of f(x) = 0 then so is (a − bi).
3. If x = a is a root of f(x) = 0, a polynomial equation of degree “n”, then (x − a)
is a factor of f(x); on dividing f(x) by (x − a) we obtain a polynomial of degree (n − 1).

REMARK
There are two types of methods to find the roots of Algebraic
and Transcendental equations.
(i) DIRECT METHODS
(ii) INDIRECT (ITERATIVE) METHODS
DIRECT METHODS
1. Direct methods give the exact value of the roots in a finite number of
steps.
2. These methods determine all the roots at the same time assuming no
round off errors.
3. In the category of direct methods, elimination methods are
advantageous because they can be applied even when the system is large.
INDIRECT (ITERATIVE) METHODS
1. These are based on the concept of successive approximation. The
general procedure is to start with one or more approximations to the
root and obtain a sequence of iterates {xₙ} which in the limit converges
to the actual or true root.
2. Indirect methods determine one or two roots at a time.
3. Rounding errors have less effect.
4. These are self-correcting methods.
5. They are easier to program and can be implemented on a computer.

REMEMBER: Indirect Methods are further divided into two categories

I. BRACKETING METHODS
II. OPEN METHODS
BRACKETING METHODS
These methods require the limits between which the root lies. e.g. Bisection
method, False position method.
OPEN METHODS
These methods require an initial estimate of the solution, e.g. the
Newton–Raphson method.
ADVANTAGES AND DISADVANTAGES OF BRACKETING METHODS
 Bracketing methods always converge.
 The main disadvantage is that if it is not possible to bracket the root,
the method is not applicable.

GEOMETRICAL ILLUSTRATION OF BRACKETING METHODS

 In these methods we choose two points xₙ and xₙ₋₁ such that
f(xₙ) and f(xₙ₋₁) are of opposite signs.
 The intermediate value property suggests that the graph of y = f(x)
crosses the x-axis between these two points; therefore a root, say
x = x_r, lies between these two points.

REMARK
Always set your calculator to radian mode while solving
transcendental or trigonometric equations.

INTERMEDIATE VALUE THEOREM

If f(x) is continuous on [a, b] and f(a) and f(b) have opposite signs,
then f(x) = 0 has at least one real root between a and b.
This result is very simple to use. We set up a table of values of f (x) for various
values of x. Studying the changes in signs in the values of f (x), we determine the
intervals in which the roots lie. For example, if f (1) and f (2) are of opposite
signs, then there is a root in the interval (1, 2).
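A minimal Python sketch of this tabulation idea (the helper bracket_roots and the grid choices are illustrative); it reports the subintervals where f changes sign and hence, by the intermediate value theorem, where a root must lie:

```python
# Minimal sketch: tabulate f on a grid and report sign-change subintervals.
def f(x: float) -> float:
    return x**3 + x - 1          # the example function used below

def bracket_roots(f, a: float, b: float, n: int):
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    brackets = []
    for left, right in zip(xs[:-1], xs[1:]):
        if f(left) * f(right) < 0:   # opposite signs => a root lies inside
            brackets.append((left, right))
    return brackets

print(bracket_roots(f, 0.0, 1.0, 2))   # [(0.5, 1.0)]
```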
Methods for Solving Non-Linear Equations

1. Bisection Method
2. Iteration Method or Fixed Point Iteration
3. Regula Falsi Method or Method of False Position
4. Newton-Raphson Method
5. Secant Method
MERITS OF BISECTION METHOD

1. The iteration using the bisection method always produces a root, since the
method brackets the root between two values.
2. As iterations are conducted, the length of the interval gets halved, so
convergence to a solution of the equation is guaranteed.
3. The bisection method is simple to program on a computer.

DEMERITS OF BISECTION METHOD


1. The convergence of the bisection method is slow, as it is simply based on
halving the interval.
2. It cannot be applied over an interval where there is a discontinuity.
3. It cannot be applied over an interval where the function always takes
values of the same sign.
4. The method fails to determine complex roots (it gives only real roots).
5. Even if one of the initial guesses “a₀” or “b₀” is close to the exact solution,
the method may still take a large number of iterations to reach the root.

EXAMPLE
Solve the equation x³ + x − 1 = 0 using the Bisection Method.
SOLUTION

Let f(x) = x³ + x − 1

x        0       0.5      1
f(x)    −1    −0.375      1

f(0.5) < 0 and f(1) > 0 (opposite signs)

so, root lies between 0.5 and 1

1st Iteration

x₁ = (0.5 + 1)/2 = 0.75

f(0.75) = (0.75)³ + 0.75 − 1 = 0.1719 > 0

f(0.5) < 0 and f(0.75)>0 (opposite signs)

f(0.5) f(0.75)<0

so, root lies between 0.5 and 0.75


2nd Iteration

x₂ = (0.5 + 0.75)/2 = 0.625

f(0.625) = (0.625)³ + 0.625 − 1 = −0.1309 < 0

f(0.625) < 0 and f(0.75)>0 (opposite signs)

f(0.625) f(0.75)<0

so, root lies between 0.625 and 0.75

3rd Iteration

x₃ = (0.625 + 0.75)/2 = 0.6875

f(0.6875) = (0.6875)³ + 0.6875 − 1 = 0.0125 > 0

f(0.625) < 0 and f(0.6875)>0 (opposite signs)

f(0.625) f(0.6875)<0

so, root lies between 0.625 and 0.6875

4th Iteration

x₄ = (0.625 + 0.6875)/2 = 0.6563

f(0.6563) = (0.6563)³ + 0.6563 − 1 = −0.0611 < 0

f(0.6563) < 0 and f(0.6875)>0 (opposite signs)


f(0.6563) f(0.6875)<0

so, root lies between 0.6563 and 0.6875

5th Iteration

x₅ = (0.6563 + 0.6875)/2 = 0.6719

f(0.6719) = (0.6719)³ + 0.6719 − 1 = −0.0248 < 0

f(0.6719) < 0 and f(0.6875) > 0 (opposite signs)

f(0.6719) f(0.6875)<0

so, root lies between 0.6719 and 0.6875

6th Iteration

x₆ = (0.6719 + 0.6875)/2 = 0.6797

f(0.6797) = (0.6797)³ + 0.6797 − 1 = −0.0063 < 0

f(0.6797) < 0 and f(0.6875) > 0 (opposite signs)

f(0.6797) f(0.6875)<0

so, root lies between 0.6797 and 0.6875

7th Iteration

x₇ = (0.6797 + 0.6875)/2 = 0.6836

f(0.6836) = (0.6836)³ + 0.6836 − 1 = 0.0031 > 0

f(0.6797) < 0 and f(0.6836) > 0 (opposite signs)

f(0.6797) f(0.6836)<0

so, root lies between 0.6797 and 0.6836

8th Iteration

x₈ = (0.6797 + 0.6836)/2 = 0.6817

f(0.6817) = (0.6817)³ + 0.6817 − 1 = −0.0016 < 0

f(0.6817) < 0 and f(0.6836) > 0 (opposite signs)

f(0.6817) f(0.6836)<0

so, root lies between 0.6817 and 0.6836

9th Iteration

x₉ = (0.6817 + 0.6836)/2 = 0.6826

f(0.6826) = (0.6826)³ + 0.6826 − 1 = 0.0007 > 0

f(0.6817) < 0 and f(0.6826) > 0 (opposite signs)


f(0.6817) f(0.6826)<0

so, root lies between 0.6817 and 0.6826

10th Iteration

x₁₀ = (0.6817 + 0.6826)/2 = 0.6822

f(0.6822) = (0.6822)³ + 0.6822 − 1 = −0.0004 < 0

f(0.6822) < 0 and f(0.6826) > 0 (opposite signs)

f(0.6822) f(0.6826)<0

so, root lies between 0.6822 and 0.6826

We continue in this manner until the required accuracy is achieved.
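A minimal Python sketch of the bisection procedure carried out above (the tolerance and iteration cap are illustrative choices, not prescribed by the notes); it reproduces the approximate root of x³ + x − 1 = 0:

```python
# Minimal sketch of the bisection method used in the worked example.
def bisection(f, a: float, b: float, tol: float = 1e-4, max_iter: int = 100) -> float:
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        mid = (a + b) / 2
        if f(mid) == 0 or (b - a) / 2 < tol:
            return mid
        if f(a) * f(mid) < 0:   # root lies in [a, mid]
            b = mid
        else:                   # root lies in [mid, b]
            a = mid
    return (a + b) / 2

root = bisection(lambda x: x**3 + x - 1, 0.5, 1.0)
print(f"{root:.4f}")   # 0.6823, matching the iterations above
```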

EXAMPLE

Solve x³ − 9x + 1 = 0 for the root between x = 2 and x = 4.

SOLUTION
EXAMPLE

Use the bisection method to find the root of the function describing
the drag coefficient of a parachutist, given by
SOLUTION

EXAMPLE
Solve the equation sin x − 5x + 2 = 0 using the Bisection Method, correct to
4 decimal places.

SOLUTION

Let f(x) = sin x − 5x + 2

x        0       0.2      0.4       0.6
f(x)     2    1.1987   0.3894   −0.4354

f(0.4) > 0 and f(0.6) < 0 (opposite signs)

so, root lies between 0.4 and 0.6

1st Iteration

x₁ = (0.4 + 0.6)/2 = 0.5

f(0.5) = sin(0.5) − 5(0.5) + 2 = −0.0206 < 0

f(0.4) > 0 and f(0.5) < 0 (opposite signs)

f(0.4) f(0.5) < 0

so, root lies between 0.4 and 0.5

2nd Iteration

x₂ = (0.4 + 0.5)/2 = 0.45

f(0.45) = sin(0.45) − 5(0.45) + 2 = 0.1850 > 0

f(0.5) < 0 and f(0.45)>0 (opposite signs)

f(0.5) f(0.45)<0

so, root lies between 0.45 and 0.5

3rd Iteration

x₃ = (0.45 + 0.5)/2 = 0.475

f(0.475) = sin(0.475) − 5(0.475) + 2 = 0.0823 > 0

f(0.5) < 0 and f(0.475)>0 (opposite signs)

f(0.5) f(0.475)<0

so, root lies between 0.475 and 0.5


4th Iteration

x₄ = (0.475 + 0.5)/2 = 0.4875

f(0.4875) = sin(0.4875) − 5(0.4875) + 2 = 0.0309 > 0

f(0.5) < 0 and f(0.4875)>0 (opposite signs)

f(0.5) f(0.4875)<0

so, root lies between 0.4875 and 0.5

5th Iteration

x₅ = (0.4875 + 0.5)/2 = 0.4938

f(0.4938) = sin(0.4938) − 5(0.4938) + 2 = 0.0049 > 0

f(0.5) < 0 and f(0.4938) > 0 (opposite signs)

f(0.5) f(0.4938) < 0

so, root lies between 0.4938 and 0.5

6th Iteration

x₆ = (0.4938 + 0.5)/2 = 0.4969

f(0.4969) = sin(0.4969) − 5(0.4969) + 2 = −0.0078 < 0

f(0.4969) < 0 and f(0.4938) > 0 (opposite signs)

f(0.4969) f(0.4938) < 0

so, root lies between 0.4938 and 0.4969

7th Iteration

x₇ = (0.4938 + 0.4969)/2 = 0.4954

f(0.4954) = sin(0.4954) − 5(0.4954) + 2 = −0.0017 < 0

f(0.4954) < 0 and f(0.4938) > 0 (opposite signs)

f(0.4954) f(0.4938) < 0

so, root lies between 0.4938 and 0.4954

8th Iteration

x₈ = (0.4938 + 0.4954)/2 = 0.4946

f(0.4946) = sin(0.4946) − 5(0.4946) + 2 = 0.0017 > 0

f(0.4954) < 0 and f(0.4946) > 0 (opposite signs)

f(0.4954) f(0.4946) < 0

so, root lies between 0.4946 and 0.4954

9th Iteration

x₉ = (0.4946 + 0.4954)/2 = 0.4950

f(0.4950) = sin(0.4950) − 5(0.4950) + 2 = 0.0000

So, x=0.4950 is the required approximate root.
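The result can be checked quickly with a short Python sketch (purely illustrative):

```python
# Quick, illustrative check of the result above.
import math

def f(x: float) -> float:
    return math.sin(x) - 5 * x + 2

print(f(0.4950))                   # ~4.1e-05, zero to 4 decimal places
print(f(0.4946) * f(0.4954) < 0)   # True: the final bracket still contains the root
```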

Practice problems (answers given):

(ix) 2ˣ − 5x + 2 = 0    ans: 0.7322

(x) 2e⁻ˣ − sin x = 0    ans: 0.92

(xi) 2x³ + x − 2 = 0    ans: 0.8351
