ChE 110
Advanced Engineering Mathematics in Chemical
Engineering
Module 1
ROOT-FINDING (solution to non-linear equations)
Simplest method:
Always plot or graph to narrow your search for the real root.
[Sketch: the plotted curve crosses zero once – 1 root within this area.]
BRACKETING METHODS
1. Bisection – oldest method; figured out by the Greeks in 1000 BC
2. Regula Falsi – 1700 (middle ages)
3. Advanced methods (1970)
a. Pegasus
b. Kings
c. University of Illinois
Bracketing methods
- are based on two initial guesses that bracket the root. They always work
but converge slowly (more iterations are needed).
- Bisection, false position, graphing
Open methods
- involve one or more initial guesses but there is no need for them to
bracket the root.
- do not always work (since they can diverge) but when they do they
usually converge faster
- Secant method, Newton-Raphson, inverse quadratic interpolation
BISECTION METHOD
1. Choose an interval which contains the root – make sure there is only 1
root; if there are 2, change the interval
2. Evaluate the function f(x) at the midpoint
3. Determine which interval to discard
4. Go back to step 2
5. When the interval is reduced below the tolerance, report the root as the midpoint of the interval
[Sketch: the interval endpoints carry a + and a −; the midpoint splits it, the half whose endpoints share the same sign is discarded, and the other half becomes the new interval.]
The bracketing endpoints should always give one positive (+) and one negative (−) function value.
Each iteration discards 50% of the interval (the half with the same sign at both ends).
Example: f(x) = x^2 − 4
Initial guess: x_1 = 1, x_2 = 3 (a good guess because there is a root in between)
Or: x_1 = 0, x_2 = 3 (not −3)
x = (x_1 + x_2)/2 = 2
f(2) = 2^2 − 4 = 0 (root is obtained)
Bisection is the slowest method because the domain is only cut by 50% each iteration (not 99%).
Bisection – a technique that does not care about the details of the problem (it only needs a sign change).
[Plot of f(x) = x^2 − 4 over roughly 0.5 ≤ x ≤ 3.5, crossing zero at x = 2.]
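A minimal Python sketch of the bisection steps above, applied to f(x) = x^2 − 4 on the interval [1, 3] from the worked example (the tolerance and iteration cap are arbitrary choices, not from the notes):

```python
def bisection(f, a, b, tol=1e-8, max_iter=100):
    """Bisection: halve the bracketing interval [a, b] until it is smaller than tol."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs (the interval must bracket a root)")
    for _ in range(max_iter):
        m = (a + b) / 2              # step 2: evaluate f at the midpoint
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m                 # step 5: report the midpoint as the root
        if fa * fm < 0:              # step 3: keep the half where the sign still changes
            b, fb = m, fm
        else:
            a, fa = m, fm
    return (a + b) / 2

print(bisection(lambda x: x**2 - 4, 1, 3))   # ~2.0, as in the worked example
```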
FIXED POINT ITERATION
- Method of successive substitution
x = g(x): its utility is that it provides a formula to predict a new value
of x as a function of an old value of x.
The first way to incorporate such information is to consider a straight-line
approximation to the function. Since we know how to find the zero of a
linear function, it is not much more work to find an approximation to the zero
of f(x) not as the midpoint of the interval, but as the point where the
straight-line approximation to f crosses the x-axis.
In algebra: y = f(x)
SIMPLE FIXED-POINT ITERATION (ONE-POINT ITERATION/SUCCESSIVE
SUBSTITUTION)
- Rearrange the function f(x)=0 so that x is on the left-hand side of
the equation: x=g ( x ) (1)
- This transformation can be accomplished either by algebraic
manipulation or by simply adding x to both sides of the original
equation.
- The utility of equation (1) is that it provides a formula to predict a
new value of x as a function of an old value of x. Thus, given an
initial guess at the root x_i, equation (1) can be used to compute a
new estimate x_{i+1} as expressed by the iterative formula:
x_{i+1} = g(x_i)
(If x is ChE 110, to pass ChE 110, you must have taken ChE 110)
x^2 − 4 = 0
x^2 = 4
x = √4 (not allowed as an iteration form)
x = 4/x (allowed)
Try x = 2: 4/2 = 2 (already a fixed point)
Try x = 3: x = 4/3 = 1.3333, then x = 4/1.3333 = 3, and so on – the iterates oscillate between 1.3333 and 3 (chaos theory; the iterates cycle with period 2)
Find a root of x^4 − x − 10 = 0 using the fixed-point iterative scheme:
a) x = 10/(x^3 − 1)
b) x = (x + 10)^(1/4)
c) x = …
Try 10 iterations for each.
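A hedged sketch of the fixed-point scheme for this exercise, using rearrangement (b), x = (x + 10)^(1/4); the starting guess x_0 = 2 is an assumption, and 10 iterations is the count suggested above:

```python
def fixed_point(g, x0, n_iter=10):
    """Simple fixed-point iteration (successive substitution): x_{i+1} = g(x_i)."""
    x = x0
    for i in range(n_iter):
        x = g(x)
        print(f"iteration {i + 1}: x = {x:.6f}")
    return x

# rearrangement (b): x = (x + 10)**(1/4), for x**4 - x - 10 = 0
fixed_point(lambda x: (x + 10) ** 0.25, x0=2.0)   # settles near 1.8556
```

Rearrangement (a) typically diverges from the same starting guess, which is the point of comparing the three schemes.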
Numerical Methods:
1. Direct Method (Gaussian Elimination)
- produces an answer to a problem in a fixed number of
computational steps
2. Iterative Method
- Produces a sequence of approximate answers (designed to
converge ever closer to the true solution, under the proper
conditions)
s = b − [(b − a)/(f(b) − f(a))]·f(b),   where b = x_1, a = x_2, s = x_3
REGULA FALSI METHOD (rule of false position)
Algebra comes from Arabs
x_3 = x_1 − f(x_1)(x_2 − x_1)/(f(x_2) − f(x_1)),   equivalently   s = b − [(b − a)/(f(b) − f(a))]·f(b)   with b = x_1, a = x_2, s = x_3
The new estimate can move up & down within the interval (the reduction is not limited to 50%).
f(x) = x^2 − 4
Initial guess: x_1 = 0, x_2 = 3
10 steps – compare bisection & regula falsi.
- proceeds as in bisection to find the sub-interval [a, S] or [S, b] that
contains the zero by testing for a change of sign of the function, i.e., testing
whether f(a)·f(S) < 0 or f(S)·f(b) < 0.
- if there is a zero in the interval [a, S], we leave the value of a unchanged and
set b = S.
- on the other hand, if there is no zero in [a, S], the zero must be in the interval
[S, b], so we set a = S and leave b unchanged.
STEPS IN REGULA FALSI:
1. Define function, f(x), whose zero (0) is desired.
2. Define interval [a, b] containing the desired zero; compute f(a) and f(b)
3. Compute the approximate solution S from the equation above; compute f(S). If f(S) = 0, stop (S is the desired zero).
4. Determine if zero is in [a,S] or in [S,b] if f(a).f(S) <0, then zero is in [a,S].
Otherwise, zero is in [S,b]
5. Record current values of a, S, b, f(a), f(S), f(b) in the table. If continuing,
change the definition of a or b in Step 2 as follows:
If zero is in [a,S], redefine b to have the current value of S.
If zero is in [S,b], redefine a to have the current value of S.
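A minimal Python sketch of the regula falsi steps above (the iteration cap and the |f(S)| stopping test are assumptions, not from the notes); it reproduces the cube-root-of-2 example that follows:

```python
def regula_falsi(f, a, b, tol=1e-6, max_iter=50):
    """Regula falsi: keep whichever sub-interval [a, S] or [S, b] still brackets the zero."""
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        s = b - fb * (b - a) / (fb - fa)   # step 3: false-position (straight-line) estimate
        fs = f(s)
        if abs(fs) < tol:                  # stop when S is (nearly) the desired zero
            return s
        if fa * fs < 0:                    # step 4: the zero is in [a, S]
            b, fb = s, fs
        else:                              # otherwise the zero is in [S, b]
            a, fa = s, fs
    return s

print(regula_falsi(lambda x: x**3 - 2, 1, 2))   # ~1.2599, matching the table below
```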
Example: To find a numerical approximation to the cube root of 2 (∛2), we seek the zero of
y = f(x) = x^3 − 2, with the 1st approximation shown in the figure below:
[Graph of y = x^3 − 2 and the straight-line approximation on the interval (1, 2), with a short table of sample points between x = 1 and x = 2.]
Begin by defining the function: f(x) = x^3 − 2
Begin the computation by giving the initial bounds on the zero: a = 1, b = 2
Evaluate the function at these points:
f(1) = 1^3 − 2 = −1
f(2) = 2^3 − 2 = 6
s = b − f(b)(b − a)/(f(b) − f(a)) = 2 − 6(2 − 1)/(6 − (−1)) = 1.143
a        b   S        f(a)           f(b)   f(S)
1        2   1.143    −1             6      −0.507
1.143    2   1.21     −0.507         6      −0.23
1.21     2   1.239    −0.23          6      −0.098
1.239    2   1.251    −0.098         6      −0.042
1.251    2   1.2562   −0.0422        6      −0.0177
1.2562   2   1.2584   −0.0177        6      −7.2348×10^−3
1.2584   2   1.2593   −7.2348×10^−3  6      −2.9561×10^−3
1.2593   2   1.2597   −2.9561×10^−3  6      −1.0525×10^−3
1.2597   2   1.2598   −1.0525×10^−3  6      −5.7641×10^−4
1.2598   2   1.2599   −5.7641×10^−4  6      −1.0024×10^−4
1.2599   2   1.2599   −1.0024×10^−4  6      −1.0024×10^−4
NEWTON’S METHOD (NEWTON - RAPHSON)
- most used
- derived graphically (but the graphical derivation does not point out the weakness of this
method) – it can chase roots that do not exist
- it may head for an imaginary root instead of the real one
Taylor’s series expansion (assume everything is close to the answer/root):
f(x_{i+1}) = f(x_i) + f'(x_i)(x_{i+1} − x_i) + (f''(x_i)/2!)(x_{i+1} − x_i)^2 + ...
The higher-order terms are negligible, and if x_{i+1} is taken to be the root, f(x_{i+1}) = 0, so:
x_{i+1} = x_i − f(x_i)/f'(x_i)   (1)   Newton’s method
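A minimal sketch of equation (1); the derivative is supplied by hand, and the tolerance and iteration limit are arbitrary choices, not from the notes:

```python
def newton(f, dfdx, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = x**2 - 4, f'(x) = 2x, as in the example below
print(newton(lambda x: x**2 - 4, lambda x: 2 * x, x0=3.0))   # ~2.0 in a handful of iterations
```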
Example:
f(x) = x^2 − 4
f'(x) = 2x
x_{i+1} = x_i − (x_i^2 − 4)/(2x_i) = (2x_i^2 − x_i^2 + 4)/(2x_i) = (x_i^2 + 4)/(2x_i) = (1/2)(x_i + 4/x_i)   (similar to fixed point)
Quadratically convergent: from an initial guess close to the root, about 5 iterations give an accuracy of 1×10^−8, and one more iteration gives about 1×10^−16 (quadratic: the number of correct digits roughly doubles each step).
Bisection and regula falsi are linear (the error is proportional to the step size).
Newton needs no interval (it only requires 1 guess), but it is only linear if the guess is far from the correct point; combining bisection with Newton guarantees finding a root.
Is Brent’s method comparable with Newton?
The problem with Newton is f'(x): if the function is not simple, it is difficult to find the derivative.
Example: x = ln x · e^x · sin 4x
SECANT METHOD
- the graphical derivation does not show the relationship of the secant method to Newton’s method
- uses a finite divided difference in place of the derivative
- requires 2 initial estimates; f(x) is not required to change signs, therefore this is not a bracketing method
x_{i+1} = x_i − f(x_i)/f'(x_i)   (the derivative is a slope, dy/dx)
Replace f'(x_i) with (f(x_i) − f(x_{i−1}))/(x_i − x_{i−1}) ≈ (y_1 − y_0)/(x_1 − x_0):
x_{i+1} = x_i − f(x_i)·(x_i − x_{i−1})/(f(x_i) − f(x_{i−1}))
Example:
Determine the root of f(x) = e^(−x) − x using the Secant Method. Use starting
points x_0 = 0 and x_1 = 1.
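A hedged sketch of the secant iteration for this example, f(x) = e^(−x) − x with x_0 = 0 and x_1 = 1 (the stopping tolerance is an assumption):

```python
import math

def secant(f, x0, x1, tol=1e-8, max_iter=50):
    """Secant method: approximate f'(x_i) by the slope through the last two points."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        # x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

print(secant(lambda x: math.exp(-x) - x, 0.0, 1.0))   # ~0.5671
```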
SECANT METHODS
- For derivatives that are difficult or inconvenient to evaluate, the derivative can be
approximated by a backward finite divided difference:
f'(x_i) ≈ (f(x_{i−1}) − f(x_i))/(x_{i−1} − x_i)
which can be substituted into (1) to yield the iterative equation
x_{i+1} = x_i − f(x_i)·(x_{i−1} − x_i)/(f(x_{i−1}) − f(x_i))   (2), the formula for the secant method
- The approach requires two initial estimates of x. However, because f(x) is
not required to change signs between the estimates, it is not classified as
a bracketing method.
- Rather than using 2 arbitrary values to estimate the derivative, an
alternative approach involves a fractional perturbation of the independent
variable to estimate f'(x):
f'(x_i) ≈ (f(x_i + δx_i) − f(x_i))/(δx_i)
where δ = a small perturbation fraction. This approximation can be substituted into eqn (1) to yield
x_{i+1} = x_i − δx_i·f(x_i)/(f(x_i + δx_i) − f(x_i))   (3)   (modified secant method)
- Provides a nice means to attain the efficiency of Newton-Raphson without having to compute derivatives.
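A minimal sketch of equation (3), the modified secant method; the perturbation fraction δ = 0.01, the test function e^(−x) − x, and the starting guess are illustrative choices, not from the notes:

```python
import math

def modified_secant(f, x0, delta=0.01, tol=1e-8, max_iter=50):
    """Modified secant: estimate f'(x_i) from a fractional perturbation delta*x_i."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        # x_{i+1} = x_i - delta*x_i*f(x_i) / (f(x_i + delta*x_i) - f(x_i))
        step = delta * x * fx / (f(x + delta * x) - fx)
        x -= step
        if abs(step) < tol:
            break
    return x

print(modified_secant(lambda x: math.exp(-x) - x, x0=1.0))   # ~0.5671, no derivative needed
```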
SECANT METHOD
- closely related to regula falsi, it results from a slight modification of the
latter
- Instead of choosing the subinterval that must contain the zero, form the
next approximation from the two most recently generated points:
x_2 = x_1 − [(x_1 − x_0)/(y_1 − y_0)]·y_1
At the Kth stage, the new approximation to the zero is
x_{K+1} = x_K − [(x_K − x_{K−1})/(y_K − y_{K−1})]·y_K
1. Define function, f(x), whose zero is desired.
2. Define first two approximations, a and b (a ≠ b )
It is not required that a < b, or that there actually be a zero between
a & b; evaluate f(a) and f(b)
3. Compute new approximate solution
S = b − f(b)(b − a)/(f(b) − f(a))
Evaluate f(s)
If f(s) = 0, stop
4. Record current values of a, b, s, f(a), f(b), f(s) in a table. Decide whether
to continue computations. If you need to continue, update definitions of a
& b in step 2, by setting a = b, b = s
f(x) = x^2 − 3
a = 1, f(a) = −2
b = 2, f(b) = 1
S = 1.6667
f(S) = −0.2222
a        b        S        f(a)      f(b)          f(S)
1        2        1.6667   −2        1             −0.2222
2        1.6667   1.7273   1         −0.2222       −0.0165
1.6667   1.7273   1.7321   −0.2222   −0.0165       3.1886×10^−4
1.7273   1.7321   1.7321   −0.0165   3.1886×10^−4  3.1886×10^−4
Difference between regula falsi (needs an interval) and secant (needs 2 guesses)
advantage: derivative-free
disadvantage: needs 2 guesses, x_1 and x_2
Secant does not improve on Newton’s method; it is slightly slower than Newton.
PEGASUS METHOD
- starts as in Regula Falsi; at the next iteration:
if f_{i+1}·f_i < 0, then (x_{i−1}, f_{i−1}) is replaced by (x_i, f_i)
if f_{i+1}·f_i > 0, replace (x_{i−1}, f_{i−1}) by (x_{i−1}, f_{i−1}·f_i / (f_i + f_{i+1}))
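A hedged sketch of the Pegasus update above, grafted onto a regula falsi loop (the test function, bracket, and tolerance are illustrative assumptions):

```python
def pegasus(f, x0, x1, tol=1e-10, max_iter=50):
    """Pegasus: regula falsi, but the retained endpoint's f-value is scaled to avoid stagnation."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # regula falsi step
        f2 = f(x2)
        if abs(f2) < tol:
            return x2
        if f2 * f1 < 0:
            x0, f0 = x1, f1                    # replace (x_{i-1}, f_{i-1}) by (x_i, f_i)
        else:
            f0 = f0 * f1 / (f1 + f2)           # scale the retained f_{i-1}
        x1, f1 = x2, f2
    return x1

print(pegasus(lambda x: x**3 - 2, 1.0, 2.0))   # ~1.2599, faster than plain regula falsi
```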
Newton - quadratic
Secant - superlinear
Bisection - linear
Are there methods that are cubic?
books: cover up to quadratic only
Study: extensions to Newton’s method to improve order of convergence
ITERATIVE METHODS
GAUSS SEIDEL - most commonly used
[A]{x} = {b}
- to start the solution, make initial guesses for the x’s; a simple approach is
to assume that they are all zero
ε_a = | (x_i^j − x_i^(j−1)) / x_i^j | × 100%
j = present iteration
j − 1 = previous iteration
Example:
3x_1 − 0.1x_2 − 0.2x_3 = 7.85
0.1x_1 + 7x_2 − 0.3x_3 = −19.3
0.3x_1 − 0.2x_2 + 10x_3 = 71.4
First, solve each of the equations for its unknown on the diagonal:
x_1 = (7.85 + 0.1x_2 + 0.2x_3)/3   (1)
x_2 = (−19.3 − 0.1x_1 + 0.3x_3)/7   (2)
x_3 = (71.4 − 0.3x_1 + 0.2x_2)/10   (3)
Set x_2 = x_3 = 0 and substitute in (1):
x_1 = (7.85 + 0.1(0) + 0.2(0))/3 = 2.616667
x_2 = (−19.3 − 0.1(2.616667) + 0.3(0))/7 = −2.794524
x_3 = (71.4 − 0.3(2.616667) + 0.2(−2.794524))/10 = 7.005610
For the second iteration, the same process is repeated to compute:
x_1 = (7.85 + 0.1(−2.794524) + 0.2(7.005610))/3 = 2.990557
x_2 = (−19.3 − 0.1(2.990557) + 0.3(7.005610))/7 = −2.499625
x_3 = (71.4 − 0.3(2.990557) + 0.2(−2.499625))/10 = 7.000291
The method is therefore converging on the true solution; make additional iterations.
For x_1:
ε_{a,1} = |(2.990557 − 2.616667)/2.990557| × 100% = 12.5%
ε_{a,2} = 11.8%
ε_{a,3} = 0.076%
As each new x value is computed, it is immediately used in the next equation to
determine another x value.
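A minimal sketch of Gauss-Seidel for the 3×3 example above; the 5% stopping criterion mirrors the relative-error formula in the notes, and the plain-list representation of [A] and {b} is just a convenience:

```python
def gauss_seidel(A, b, tol_pct=5.0, max_iter=100):
    """Gauss-Seidel: each new x_i is used immediately in the remaining equations."""
    n = len(b)
    x = [0.0] * n                                      # initial guess: all zeros
    for _ in range(max_iter):
        worst = 0.0
        for i in range(n):
            old = x[i]
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]                # solve equation i for its diagonal unknown
            if x[i] != 0:
                worst = max(worst, abs((x[i] - old) / x[i]) * 100)
        if worst < tol_pct:                            # all relative errors below the tolerance
            break
    return x

A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
b = [7.85, -19.3, 71.4]
print(gauss_seidel(A, b))   # approaches the true solution x_1 = 3, x_2 = -2.5, x_3 = 7
```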
JACOBI ITERATION
- Utilizes a somewhat different tactic.
- Rather than using the latest available x’s, this technique uses the equation
to compute a new set of x’s on the basis of a set of old x’s.
- Thus, as new values are generated, they are not immediately used but
rather are retained for the next iteration.
|a_ii| > Σ_{j=1, j≠i} |a_ij|
i.e., the absolute value of the diagonal coefficient in each of the equations
must be larger than the sum of the absolute values of the other coefficients
in the equation.
Example:
Use the Gauss-Seidel method to solve the following system until the percent
relative error falls below ε_s = 5%:
10x_1 + 2x_2 − x_3 = 27
−3x_1 − 6x_2 + 2x_3 = −61.5
x_1 + x_2 + 5x_3 = −21.5
x_1 = (27 − 2x_2 + x_3)/10   equation (1)
x_2 = (61.5 − 3x_1 + 2x_3)/6   equation (2)
x_3 = (−21.5 − x_1 − x_2)/5   equation (3)
Initially, set x_2 = x_3 = 0 and substitute in (1) to solve for x_1.
Then substitute x_1 from (1) and x_3 = 0 in (2) to solve for x_2.
Then substitute x_1 from (1) and x_2 from (2) in (3) to solve for x_3.
Use the computed x_1, x_2, x_3 in the next iteration.
* By the Jacobi method: 1) set x_2 = x_3 = 0
2) When a complete set of x_1, x_2, and x_3 is generated, use it for the 2nd iteration.
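A hedged sketch of the Jacobi variant for the same example; unlike Gauss-Seidel, the new values are held back until a full sweep is finished (the fixed iteration count is an arbitrary choice):

```python
def jacobi(A, b, n_iter=20):
    """Jacobi iteration: compute the whole new set of x's from the previous iteration's x's."""
    n = len(b)
    x = [0.0] * n                                      # initial guess: all zeros
    for _ in range(n_iter):
        x_new = x[:]                                   # new values are retained for the next sweep
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new
    return x

A = [[10, 2, -1], [-3, -6, 2], [1, 1, 5]]
b = [27, -61.5, -21.5]
print(jacobi(A, b))   # converges toward x_1 = 0.5, x_2 = 8, x_3 = -6
```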