
Before you start:

"And say: Work, for Allah will see your work, and so will His Messenger and the believers; and you will be returned to the Knower of the unseen and the witnessed, and He will inform you of what you used to do." (Quran 9:105)

Numerical Analysis
Summary
NA Team 2024
Nonlinear Equations
1) Simple Iteration Method
Steps:
1- If X0 is not given:
Find the interval (a,b) that contains the root. Assume X0.

2- Put the given equation in the form X = ɸ(X).

3- Check the condition of convergence |ɸ'(X)| < 1, where X ∈ (a,b).

4- If the condition of convergence is satisfied, perform
iterations using Xk+1 = ɸ(Xk) until you satisfy the stopping
condition of the problem.
**If the condition of convergence is not satisfied, go back to
step 2 and rearrange the equation into a different form X = ɸ(X).
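A minimal Python sketch of the iteration loop above, assuming an example rearrangement ɸ(x) = cos(x) and a stopping test on the absolute relative approximate error (both illustrative choices, not from the notes):

```python
import math

def fixed_point(phi, x0, eps_s=1e-4, max_iter=50):
    """Iterate x_{k+1} = phi(x_k) until |eps_a| < eps_s (eps_a in %)."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        eps_a = abs((x_new - x) / x_new) * 100  # absolute relative approximate error, %
        x = x_new
        if eps_a < eps_s:
            break
    return x

# Example: solve x = cos(x), i.e. f(x) = x - cos(x) = 0, starting from x0 = 0.5
print(fixed_point(math.cos, 0.5))  # ~0.739085
```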

2) Bisection Method
Steps:
1- Choose XL, Xu such that f(XL)f(Xu) < 0.

2- Xm = (XL + Xu)/2. (Xm is the estimate of the root)

3- Check
 If f(XL)f(Xm) < 0
∴ XL = XL, Xu = Xm
 If f(XL)f(Xm) > 0
∴ XL = Xm, Xu = Xu
 If f(XL)f(Xm) = 0
∴ f(Xm) = 0
∴ Xm is the root.
∴ Stop.

4- Xm = (XL + Xu)/2. (new estimate for the root)

5- Find the absolute relative approximate error

|ϵa| = |(Xm_new - Xm_old)/Xm_new| * 100%

6- If |ϵa|< ϵs (tolerance)
∴ Stop.
(else, go to step 3)
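A small Python sketch of the loop above; the example function, bracket, and tolerance are assumptions for illustration:

```python
def bisection(f, xl, xu, eps_s=0.05, max_iter=100):
    """Bisection; assumes f(xl) * f(xu) < 0."""
    xm_old = xl
    for _ in range(max_iter):
        xm = (xl + xu) / 2.0
        eps_a = abs((xm - xm_old) / xm) * 100  # absolute relative approximate error, %
        if f(xl) * f(xm) < 0:
            xu = xm
        elif f(xl) * f(xm) > 0:
            xl = xm
        else:
            return xm          # f(xm) = 0: xm is the root
        if eps_a < eps_s:
            return xm
        xm_old = xm
    return xm

# Example: root of f(x) = x**3 - x - 2 in (1, 2)
print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))  # ~1.5214
```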

3) Newton Method
Steps:
1- If X0 is not given, find the interval containing the root, then
assume X0.
At the start you need only one initial point, not two like the Bisection method.

2- Compute f'(X) symbolically.

3- Xi+1 = Xi - f(Xi)/f'(Xi)

4- |ϵa| = |(Xi+1 - Xi)/Xi+1| * 100%
(If |ϵa|< ϵs stop, else go to step 3).
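A minimal Python sketch of Newton's iteration, passing f and f' explicitly; the example function, starting point, and tolerance are assumed for illustration:

```python
def newton(f, df, x0, eps_s=1e-4, max_iter=50):
    """Newton's method: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        eps_a = abs((x_new - x) / x_new) * 100
        x = x_new
        if eps_a < eps_s:
            break
    return x

# Example: f(x) = x**2 - 2 with f'(x) = 2x, starting from x0 = 1
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))  # ~1.414214
```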
4) Secant Method
Steps:
1- Given two initial guesses X0&X1 or X-1&X0

2- Xi+1 = Xi - f(Xi)(Xi - Xi-1)/(f(Xi) - f(Xi-1))

3- |ϵa| = |(Xi+1 - Xi)/Xi+1| * 100%
(If |ϵa| < ϵs stop, else go to step 2).
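A short Python sketch of the secant update, given two initial guesses; the example function is an assumption:

```python
def secant(f, x_prev, x_curr, eps_s=1e-4, max_iter=50):
    """Secant method: Newton's formula with f' replaced by a finite-difference slope."""
    for _ in range(max_iter):
        x_new = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
        eps_a = abs((x_new - x_curr) / x_new) * 100
        x_prev, x_curr = x_curr, x_new
        if eps_a < eps_s:
            break
    return x_curr

# Example: f(x) = x**2 - 2 with initial guesses X0 = 1 and X1 = 2
print(secant(lambda x: x**2 - 2, 1.0, 2.0))  # ~1.414214
```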

5) False-Position Method
Same steps as bisection, except that we compute Xr (by linear
interpolation between the bracket ends) instead of the midpoint Xm.
Steps:
1- Choose XL, Xu such that f(XL)f(Xu) < 0.

2- Xr = Xu - f(Xu)(XL - Xu)/(f(XL) - f(Xu))

3- Check
 If f(XL)f(Xr) < 0
∴ XL = XL, Xu = Xr
 If f(XL)f(Xr) > 0
∴ XL = Xr, Xu = Xu
 If f(XL)f(Xr) = 0
∴ f(Xr) = 0
∴ Xr is the root.
∴ Stop.
4- Compute new Xr = Xu - f(Xu)(XL - Xu)/(f(XL) - f(Xu))

5- |ϵa| = |(Xr_new - Xr_old)/Xr_new| * 100%
(If |ϵa|< ϵs stop, else go to step 3).

Significant digits (m)


m ≤ 2 - log10(2|ϵa|), with |ϵa| in %
ex (counting the significant digits of a number):
0.035 : m = 2
4005 : m = 4
4500 : m = 2
4500.1 : m = 5
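A tiny Python helper illustrating the bound above; the sample error value is an assumption:

```python
import math

def sig_digits(eps_a_percent):
    """Largest m satisfying m <= 2 - log10(2*|eps_a|), i.e. |eps_a| <= 0.5 * 10**(2 - m) %."""
    return math.floor(2 - math.log10(2 * eps_a_percent))

print(sig_digits(0.004))  # |eps_a| = 0.004%  ->  at least 4 significant digits
```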

Nonlinear Systems of Equations


1) Newton-Raphson Method
If the problem says "Use Newton's Method" and gives you a system, solve it
with the Newton-Raphson Method.
Given f(x,y)=0, g(x,y)=0.
Required to get x, y near x0, y0.
Steps:
1- Put equations in the form f(x,y)=0 & g(x,y)=0.
2- Compute the Jacobian.

J = [fx  fy]
    [gx  gy]

3- Compute J^-1 = 1/(fx*gy - fy*gx) * [ gy  -fy]
                                      [-gx   fx]

4- [x_{i+1}]   [x_i]        [f]
   [y_{i+1}] = [y_i] - J^-1 [g]   evaluated at (xi, yi)
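A compact Python/NumPy sketch of one way to run Newton-Raphson on a 2x2 system; the example equations and starting point are assumptions:

```python
import numpy as np

def newton_raphson_system(F, J, x0, tol=1e-8, max_iter=50):
    """Newton-Raphson for F(x) = 0: x_{i+1} = x_i - J(x_i)^-1 F(x_i)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), F(x))  # solve J*step = F rather than forming J^-1
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Example system: f = x^2 + y^2 - 4 = 0, g = x*y - 1 = 0
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[0] * v[1] - 1])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
print(newton_raphson_system(F, J, [2.0, 0.5]))  # ~[1.932, 0.518]
```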


2) Modified Newton
Steps:
1- Put the equations in the form
f(x,y,z) = 0
g(x,y,z) = 0
h(x,y,z) = 0

2- Compute fx, gy, hz.

3- xi+1 = xi - f(xi, yi, zi)/fx(xi, yi, zi)

yi+1 = yi - g(xi+1, yi, zi)/gy(xi+1, yi, zi)

Note: we used xi+1 when computing yi+1.

zi+1 = zi - h(xi+1, yi+1, zi)/hz(xi+1, yi+1, zi)

Note: we used xi+1, yi+1 when computing zi+1.
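A minimal Python sketch of this modified (one-variable-at-a-time) Newton update for three unknowns, reusing the newest values as soon as they are available; the example equations and derivatives are assumptions:

```python
def modified_newton(f, fx, g, gy, h, hz, x, y, z, n_iter=20):
    """Each variable gets a 1-D Newton step, using the newest values of the others."""
    for _ in range(n_iter):
        x = x - f(x, y, z) / fx(x, y, z)
        y = y - g(x, y, z) / gy(x, y, z)   # uses the new x
        z = z - h(x, y, z) / hz(x, y, z)   # uses the new x and y
    return x, y, z

# Example: f = x^2 - 2, g = x + y^2 - 3, h = x + y + z - 4
print(modified_newton(
    lambda x, y, z: x**2 - 2,      lambda x, y, z: 2 * x,
    lambda x, y, z: x + y**2 - 3,  lambda x, y, z: 2 * y,
    lambda x, y, z: x + y + z - 4, lambda x, y, z: 1.0,
    1.0, 1.0, 1.0))  # ~(1.414, 1.259, 1.327)
```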


3) Fixed-point Method
Given (x0, y0, z0)
Given F(x,y,z) = 0
,G(x,y,z) = 0
,H(x,y,z) = 0
Steps:
1- Put the equations in the form
x = f(x,y,z)
y = g(x,y,z)
z = h(x,y,z)
In the exam, so that we all reach the same solution: take the first equation, call it f, and
solve it for x (look for the easiest x to isolate); call the second equation g and solve it for
the easiest y; and so on.

2- Check condition of convergence.


|fx| + |gx| + |hx| < 1
|fy| + |gy| + |hy| < 1 at (x0, y0, z0)
|fz| + |gz| + |hz| < 1
SUFFICIENT BUT NOT NECESSARY
That is, if the condition holds we are guaranteed to reach the solution; but if it does not
hold, we might still reach the solution or we might not.
3- xi+1 = f(xi,yi,zi)
yi+1 = g(xi,yi,zi)
zi+1 = h(xi,yi,zi)
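A short Python sketch of the fixed-point iteration for a system, substituting the previous iterate into all three rearranged equations at once; the rearrangement and starting point are example assumptions:

```python
def fixed_point_system(f, g, h, x, y, z, n_iter=30):
    """x_{i+1} = f(xi, yi, zi), y_{i+1} = g(xi, yi, zi), z_{i+1} = h(xi, yi, zi)."""
    for _ in range(n_iter):
        x, y, z = f(x, y, z), g(x, y, z), h(x, y, z)
    return x, y, z

# Example rearrangement: x = (y*z + 1)/4, y = (x + z^2)/4, z = (x*y + 2)/4
print(fixed_point_system(
    lambda x, y, z: (y * z + 1) / 4,
    lambda x, y, z: (x + z**2) / 4,
    lambda x, y, z: (x * y + 2) / 4,
    0.0, 0.0, 0.0))
```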

Numerical Differentiation
Forward Difference (F.D) Approximation
f'(xi) ≈ [f(xi+1) - f(xi)] / Δx
f''(xi) ≈ [f(xi+2) - 2f(xi+1) + f(xi)] / (Δx)^2
Backward Difference (B.D) Approximation
f'(xi) ≈ [f(xi) - f(xi-1)] / Δx
Central Difference (C.D) Approximation
f'(xi) ≈ [f(xi+1) - f(xi-1)] / (2Δx)
f''(xi) ≈ [f(xi+1) - 2f(xi) + f(xi-1)] / (Δx)^2
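A quick Python sketch comparing the three first-derivative approximations on a sample function; the function and step size are assumptions:

```python
import math

def forward_diff(f, x, dx):
    return (f(x + dx) - f(x)) / dx

def backward_diff(f, x, dx):
    return (f(x) - f(x - dx)) / dx

def central_diff(f, x, dx):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

# Example: derivative of sin(x) at x = 1 (exact value cos(1) ~ 0.540302)
for approx in (forward_diff, backward_diff, central_diff):
    print(approx.__name__, approx(math.sin, 1.0, 0.1))
```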
Richardson Extrapolation
To compute y' with higher-order accuracy (one Richardson level, N2), we need 2 first
approximations:
N1 at Δx/2 & N1 at Δx, if Δx is given,
or N1 at Δx & N1 at 2Δx otherwise.

N1(Δx)
            N2
N1(Δx/2)

(or combine N1(2Δx) with N1(Δx) in the same way)

To go one level further (N3), we need 3 first approximations:
N1 at Δx/4, N1 at Δx/2 and N1 at Δx, if Δx is given,
or N1 at Δx, N1 at 2Δx & N1 at 4Δx otherwise.

N1(Δx)
            N2
N1(Δx/2)            N3
            N2
N1(Δx/4)

To know the factor to multiply by at each level, substitute in the step-halving formula:
N_new = N_finer + [N_finer - N_coarser] / (2^p - 1),
where p is the order of the error term being eliminated.

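A small Python sketch of one Richardson level applied to the central-difference derivative, using the step-halving combination N2 = N1(Δx/2) + [N1(Δx/2) - N1(Δx)]/(2^p - 1) with p = 2; the example function is an assumption:

```python
import math

def central_diff(f, x, dx):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

def richardson(f, x, dx, p=2):
    """One Richardson level: combine estimates at dx and dx/2 to cancel the O(dx^p) error."""
    n1_coarse = central_diff(f, x, dx)
    n1_fine = central_diff(f, x, dx / 2)
    return n1_fine + (n1_fine - n1_coarse) / (2**p - 1)

# Example: d/dx sin(x) at x = 1, exact value cos(1) ~ 0.540302
print(central_diff(math.sin, 1.0, 0.2))  # lower-order estimate N1
print(richardson(math.sin, 1.0, 0.2))    # improved estimate N2
```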

Interpolation
Given (x0,y0), (x1,y1), (x2,y2),….(xn,yn) (n+1) points
Find the value of y at a value of x that is not given in the table.
1) Direct Method
2) Lagrange
3) Newton Divided Difference
4) Spline Method

1) Direct Method
Linear Interpolation (first order)
y = a 0 + a 1x
(substitute by 2 points to get a0, a1)
Quadratic Interpolation (second order)
y = a0 + a1x + a2x^2
(substitute by 3 points to get a0, a1, a2)
Cubic Interpolation (third order)
y = a0 + a1x + a2x^2 + a3x^3
(substitute by 4 points to get a0, a1, a2, a3)
2) Lagrange
First Order (Linear)
Data: x0, x1 with f(x0), f(x1)
y = [(x - x1)/(x0 - x1)] f(x0) + [(x - x0)/(x1 - x0)] f(x1)
Second Order (Quadratic)
Data: x0, x1, x2 with f(x0), f(x1), f(x2)
y = [(x - x1)(x - x2)/((x0 - x1)(x0 - x2))] f(x0)
  + [(x - x0)(x - x2)/((x1 - x0)(x1 - x2))] f(x1)
  + [(x - x0)(x - x1)/((x2 - x0)(x2 - x1))] f(x2)
Table trick: cover one column, form the (x minus ...) factors from the x values of the
columns that are not covered, then divide by the same factors with x replaced by the
covered x, multiply all of that by the y of the covered x, and repeat for the remaining
columns.
Third Order (Cubic): Find it in the same manner
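A compact Python sketch of Lagrange interpolation of any order, built directly from the products above; the sample data points are assumptions:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Example: quadratic through (1, 1), (2, 4), (3, 9); interpolate at x = 2.5
print(lagrange([1, 2, 3], [1, 4, 9], 2.5))  # 6.25
```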
3) Newton Divided Difference
First Order (Linear)
x0  y0
x1  y1
b0 = y0, b1 = (y1 - y0)/(x1 - x0)
∴ f1(x) = b0 + b1(x-x0)
Second Order (Quadratic)
x0  y0
x1  y1
x2  y2
b0 = y0, b1 = (y1 - y0)/(x1 - x0),
b2 = [(y2 - y1)/(x2 - x1) - (y1 - y0)/(x1 - x0)] / (x2 - x0)
∴ f2(x) = b0 + b1(x-x0) + b2(x-x0)(x-x1)


Third Order (Cubic): Find it in the same manner
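A short Python sketch that builds the divided-difference coefficients b0, b1, ... and evaluates the Newton form; the sample data points are assumptions:

```python
def divided_diff_coeffs(xs, ys):
    """Return [b0, b1, ...] from the divided-difference table."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate b0 + b1(x-x0) + b2(x-x0)(x-x1) + ... using nested multiplication."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

# Example: quadratic through (1, 1), (2, 4), (3, 9); evaluate at x = 2.5
xs, ys = [1.0, 2.0, 3.0], [1.0, 4.0, 9.0]
print(newton_eval(xs, divided_diff_coeffs(xs, ys), 2.5))  # 6.25
```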

4) Spline Interpolation
Linear Interpolation
Data: x0, x1, x2, ..., xn with f(x0), f(x1), f(x2), ..., f(xn)
(n+1) points
f(x) = f(x0) + [(f(x1) - f(x0))/(x1 - x0)] (x-x0)          x0 ≤ x ≤ x1

     = f(x1) + [(f(x2) - f(x1))/(x2 - x1)] (x-x1)          x1 ≤ x ≤ x2

     = f(xn-1) + [(f(xn) - f(xn-1))/(xn - xn-1)] (x-xn-1)  xn-1 ≤ x ≤ xn


Quadratic Interpolation
f(x) = a1x^2 + b1x + c1    x0 ≤ x ≤ x1
     = a2x^2 + b2x + c2    x1 ≤ x ≤ x2
     = anx^2 + bnx + cn    xn-1 ≤ x ≤ xn

1- Substitute in each quadratic equation with the data points at the ends of its interval.
2- At each interior point, the slopes of the 2 adjacent quadratic
polynomials should be equal.
3- Assume a1 = 0. (Assume the first spline to be linear.)

Numerical Integration
1- Trapezoidal Method
2- Simpson’s Method
3- Romberg’s Method
4- Gauss Quadrature Method

1- Trapezoidal Rule
Multiple-Segment Trapezoidal Rule

I ≈ (h/2) [f(a) + 2 Σ_{i=1}^{n-1} f(a + i*h) + f(b)]
h = (b - a)/n
,where h: width of each segment.
a: the beginning of the integration.
b: the end of the integration.
Approximate Error (ET) = -(b - a) h^2 f''_mean / 12
f''_mean = [f'(b) - f'(a)] / (b - a), a ≤ x ≤ b
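A minimal Python sketch of the multiple-segment trapezoidal rule; the integrand and n below are example choices:

```python
def trapezoidal(f, a, b, n):
    """Multiple-segment trapezoidal rule with n segments of width h = (b - a)/n."""
    h = (b - a) / n
    inner = sum(f(a + i * h) for i in range(1, n))
    return (h / 2) * (f(a) + 2 * inner + f(b))

# Example: integral of x**2 from 0 to 3 with 6 segments (exact value 9)
print(trapezoidal(lambda x: x**2, 0.0, 3.0, n=6))  # 9.125
```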

Pay attention:
 If you are asked to compute the approximate error, use f'' from the formula above:
the mean-value formula for the second derivative (the difference of the first derivative
divided by the difference between b and a);
but if you are asked to compute the number of segments n such that the error does not
exceed a certain value, use the max value of the second derivative (not the mean-value
formula), because you want the error in its worst case not to reach the value specified
in the problem.
 Note that ET is the symbol for two things in this course: the True Error and the
Approximate Error; the question should make clear which one is required.

2- Simpson's Rule

Multiple-Segment Simpson's Rule
n must be an even number here.
I ≈ (h/3) [f(x0) + 4 Σ_{i odd} f(xi) + 2 Σ_{i even, i≠0,n} f(xi) + f(xn)]
,h = (b - a)/n, with x0 = a and xn = b.

Approximate Error (ET) = -(b - a) h^4 f''''_mean / 180
f''''_mean ≈ [f'''(b) - f'''(a)] / (b - a)

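A matching Python sketch of the multiple-segment Simpson's 1/3 rule; the integrand is the same assumed example as above:

```python
def simpson(f, a, b, n):
    """Multiple-segment Simpson's 1/3 rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    odd_sum = sum(f(a + i * h) for i in range(1, n, 2))
    even_sum = sum(f(a + i * h) for i in range(2, n, 2))
    return (h / 3) * (f(a) + 4 * odd_sum + 2 * even_sum + f(b))

# Example: integral of x**2 from 0 to 3 with 6 segments (exact value 9)
print(simpson(lambda x: x**2, 0.0, 3.0, n=6))  # 9.0
```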
3- Romberg's Rule

Trapezoidal + Richardson

4- Gauss-Quadrature Rule

N-Point Gauss Quadrature Formulas

I = ∫ f(x) dx (from a to b) ≈ c1f(x1) + c2f(x2) + ... + cnf(xn)
For example, if we take 3 points inside (a, b), the problem must give you:
x1, x2, x3 (called arguments)
c1, c2, c3 (called weighting factors)
Your job in the problem is to transform the integration so that, instead of running from
a to b, it runs from -1 to 1.
How? Using the transformation that you are supposed to memorize:
x = ((b - a)/2) t + ((b + a)/2)

dx = ((b - a)/2) dt

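A short Python sketch of 2-point Gauss quadrature on (a, b), using the well-known 2-point arguments ±1/√3 with weights 1, 1 together with the transformation above; the integrand is an example:

```python
import math

def gauss_2pt(f, a, b):
    """2-point Gauss quadrature: map (a, b) onto (-1, 1), then sum c_i * f(x_i)."""
    args = (-1 / math.sqrt(3), 1 / math.sqrt(3))   # arguments t1, t2
    weights = (1.0, 1.0)                           # weighting factors c1, c2
    half, mid = (b - a) / 2, (b + a) / 2
    # x = half*t + mid and dx = half*dt
    return half * sum(c * f(half * t + mid) for c, t in zip(weights, args))

# Example: integral of x**3 + 1 from 0 to 2 (exact value 6; 2-point Gauss is exact for cubics)
print(gauss_2pt(lambda x: x**3 + 1, 0.0, 2.0))  # ~6.0
```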
Multiple Integrals
Here you will be asked to compute a double integration using any of the integration
methods we studied.
There are two ideas for the problems:
The first idea: you are given an integral and a table with the x values, the y values, and
the value of the integrand f(x,y) at each x and y. In this case you can treat y as constant,
build a table of the x values and the corresponding f(x,y) values, and apply the
integration rule at that y; that performs the integration with respect to x. Then do the
integration with respect to y: put the integral values you obtained in a table against the
y values you had held constant, apply the integration rule again, and the result is the
required value.

The second idea: you may be given a function and the step h. In that case you compute
the double integration as in the previous term and work exactly like the case above; but
some problems have another trick: you can turn the double integration into a single
integration and apply the rule normally. How do you know? When the function and the
limits of x and y are the same.

Regression (Curve Fitting)

Least Squares Method

Given some readings, and it is required to get the equation
of the curve in the form:
F(x) = a0ɸ1(x) + a1ɸ2(x) + a2ɸ3(x)
The normal equations (sums taken over all data points (x, y)):
[Σɸ1ɸ1  Σɸ1ɸ2  Σɸ1ɸ3] [a0]   [Σ y ɸ1]
[Σɸ2ɸ1  Σɸ2ɸ2  Σɸ2ɸ3] [a1] = [Σ y ɸ2]
[Σɸ3ɸ1  Σɸ3ɸ2  Σɸ3ɸ3] [a2]   [Σ y ɸ3]
Then solve the equations to get a0, a1, ....., an.

Special Cases
Polynomials
F(x) = a0 + a1x + a2x^2 + ... + anx^n

[ m      Σx        Σx^2      ...  Σx^n     ] [a0]   [Σy     ]
[ Σx     Σx^2      Σx^3      ...  Σx^(n+1) ] [a1] = [Σxy    ]
[ ...                                      ] [..]   [ ...   ]
[ Σx^n   Σx^(n+1)  Σx^(n+2)  ...  Σx^(2n)  ] [an]   [Σx^n y ]
Then solve the equations to get a0, a1, ..., an.
Straight Lines
F(x) = a0 + a1x
[ m    Σx   ] [a0]   [Σy ]
[ Σx   Σx^2 ] [a1] = [Σxy]
Then solve the equations to get a0, a1.
m is the number of points
Parabolic Equations
F(x) = a0 + a1x + a2x^2
[ m     Σx    Σx^2 ] [a0]   [Σy    ]
[ Σx    Σx^2  Σx^3 ] [a1] = [Σxy   ]
[ Σx^2  Σx^3  Σx^4 ] [a2]   [Σx^2 y]
Then solve the equations to get a0, a1, a2.
 The sum of the squares of errors (deviation)
= S^2 = Σ e^2 = Σ (y - F(x))^2

 If F(x) has 2 unknowns, we can transform it to the
straight-line form.

 Root Mean Square (R.M.S) Error = √(S^2 / m)
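A small Python sketch of straight-line least squares, solving the 2x2 normal equations above directly; the sample data are assumptions:

```python
def fit_line(xs, ys):
    """Solve [m Σx; Σx Σx^2][a0; a1] = [Σy; Σxy] for the line F(x) = a0 + a1*x."""
    m = len(xs)
    sx = sum(xs)
    sx2 = sum(x * x for x in xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = m * sx2 - sx * sx
    a0 = (sy * sx2 - sx * sxy) / det
    a1 = (m * sxy - sx * sy) / det
    return a0, a1

# Example data lying close to y = 1 + 2x
xs, ys = [0, 1, 2, 3], [1.1, 2.9, 5.2, 6.9]
print(fit_line(xs, ys))  # ~(1.07, 1.97)
```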

Numerical (Iterative) Methods to Solve Linear
Systems of Equations

Jacobi Method & Gauss-Seidel Method


Solving AX=b,
we start with x(0) and stop after a certain number of iterations
or when ||x(k+1) - x(k)|| < a specified tolerance.

Ex: Solve and get the first three iterations of Gauss-Jacobi


and Gauss-Seidel. Start with x(0) = 0̄ (the zero vector).
Given: 2x3+3x4=-2, 4x1-2x2=0, -2x1+5x2-x3=2, -x2+4x3+2x4=3

Solution
We reorder the equations so that the absolute value of the number on the diagonal is the
largest in its row; if that does not hold, some equations are in the wrong place, so we
rearrange them.

4x1-2x2 = 0, |4|> |-2|+ 0 + 0


-2x1+5x2-x3 = 2, |5|> |-2|+ |-1| + 0
-x2+4x3+2x4 = 3, |4|> 0+ |-1| + |2|
2x3+3x4 = -2, |3| > 0 + 0 + |2|

Then we solve the first equation for x1 (alone on one side), the second equation for x2,
the third equation for x3, and the fourth equation for x4.

Using Jacobi:
x1(k+1) = (1/4)[2 x2(k)]
x2(k+1) = (1/5)[2 x1(k) + x3(k) + 2]
x3(k+1) = (1/4)[3 + x2(k) - 2 x4(k)]
x4(k+1) = (1/3)[-2 - 2 x3(k)]

Using Gauss-Seidel:
x1(k+1) = (1/4)[2 x2(k)]
x2(k+1) = (1/5)[2 x1(k+1) + x3(k) + 2]
x3(k+1) = (1/4)[3 + x2(k+1) - 2 x4(k)]
x4(k+1) = (1/3)[-2 - 2 x3(k+1)]

Then we build a table, substitute the numbers, and stop where the problem tells us.

You will notice that the difference between the two methods is that in Gauss-Seidel you
substitute the new x values immediately: as soon as you get a new x, forget the old one
and use the new one.

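A short Python sketch of Jacobi vs. Gauss-Seidel iterations for the reordered example system above; the iteration count is an assumption:

```python
def jacobi_example(n_iter=3):
    x1 = x2 = x3 = x4 = 0.0
    for _ in range(n_iter):
        # all right-hand sides use the values from the previous iteration
        x1, x2, x3, x4 = ((2 * x2) / 4,
                          (2 * x1 + x3 + 2) / 5,
                          (3 + x2 - 2 * x4) / 4,
                          (-2 - 2 * x3) / 3)
    return x1, x2, x3, x4

def gauss_seidel_example(n_iter=3):
    x1 = x2 = x3 = x4 = 0.0
    for _ in range(n_iter):
        x1 = (2 * x2) / 4             # nothing new yet
        x2 = (2 * x1 + x3 + 2) / 5    # uses the new x1
        x3 = (3 + x2 - 2 * x4) / 4    # uses the new x2
        x4 = (-2 - 2 * x3) / 3        # uses the new x3
    return x1, x2, x3, x4

print(jacobi_example())
print(gauss_seidel_example())
```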
Power Method

Used to get the numerical solution to the eigenvalue
problem A x̄ = λ x̄

To get the largest λ and the corresponding eigenvector x̄


Given A, and an initial value z(0) for the eigenvector
corresponding to λmax.
1- w(1) = Az(0)
2- z(1) = w(1) / (largest element of w(1) in absolute value)
Here we pick the element with the largest absolute value and divide by it as an absolute
value.

3- λ(1) = the element of w(1) with the largest absolute value
We pick the largest element in absolute value, but we substitute it with its original sign.

Repeat the above steps until a certain required condition is


satisfied.
Conditions:
 Number of iterations.
 | λ(k+1) - λ(k)| < certain number
 |Az(k) - λ(k)z(k)| < certain number
To get the smallest λ and the corresponding eigenvector x̄
Given A, and an initial value z(0) for the eigenvector
corresponding to λmax of A^-1 (which corresponds to λmin of A).
 Follow the same steps mentioned above but use
A^-1 instead of A.
 At the end, after the required condition is satisfied, you
obtain λmax of A^-1, but we want λmin of A.
∴ λmin of A = 1 / (λmax of A^-1)
Z(k) obtained is the eigenvector required
(corresponding to λmin of A).
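A compact Python/NumPy sketch of the normalized power method as described above; the matrix, starting vector, and iteration count are example choices:

```python
import numpy as np

def power_method(A, z0, n_iter=20):
    """Power method: w = A z; lambda is the entry of w largest in |value| (keeping its sign)."""
    z = np.asarray(z0, dtype=float)
    lam = 0.0
    for _ in range(n_iter):
        w = A @ z
        lam = w[np.argmax(np.abs(w))]  # largest element in absolute value, with its sign
        z = w / abs(lam)               # normalize by the largest absolute value
    return lam, z

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, z = power_method(A, [1.0, 0.0])
print(lam)  # ~5.0, the dominant eigenvalue of A
```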

Ordinary Differential Equations (O.D.E)


1) Taylor Method
2) Euler Method
3) Modified Euler Method

1) Taylor Method
Given: y’=f(x,y), y0=y(x0), Find: y(x1), y(x2),…...
y = y0 + (x-x0)y0' + [(x-x0)^2/2!] y0'' + [(x-x0)^3/3!] y0''' + ....
2) Euler Method
y ≈ y0 + hf(x0,y0) ,h = x-x0

3) Modified Euler Method


We apply the Euler Method first, but we treat its result as the 1st approximation, and
then we try to improve the value of y using this formula:
y1 = y0 + (h/2) [f(x0, y0) + f(x1, y1_Euler)]
That is, in the Modified Euler Method we do the Euler Method and then
use this corrector formula.

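A small Python sketch of one Euler step followed by the modified-Euler (predictor-corrector) improvement; the ODE, initial condition, and step size are example choices:

```python
def euler_step(f, x0, y0, h):
    """Plain Euler: y1 ~ y0 + h*f(x0, y0)."""
    return y0 + h * f(x0, y0)

def modified_euler_step(f, x0, y0, h):
    """Predict with Euler, then correct using the average of the two slopes."""
    y_pred = euler_step(f, x0, y0, h)                      # 1st approximation
    return y0 + (h / 2) * (f(x0, y0) + f(x0 + h, y_pred))  # improved value

# Example: y' = x + y, y(0) = 1, step h = 0.1 (exact y(0.1) ~ 1.11034)
f = lambda x, y: x + y
print(euler_step(f, 0.0, 1.0, 0.1))           # 1.1
print(modified_euler_step(f, 0.0, 1.0, 0.1))  # 1.11
```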