
Numerical Method Lab Quiz

Linear Equation Solving Method

The Jacobi iterative method and the Gauss-Seidel iterative method are numerical techniques used to solve systems of linear equations iteratively. Both are guaranteed to converge when the coefficient matrix is diagonally dominant.
Jacobi Iterative Method
1. Description:
o In this method, the values of all variables are held fixed during an iteration; updates take effect only at the start of the next iteration.
o For each equation, the new value of a variable is computed using the values of the other variables from the previous iteration.
2. Key Characteristics:
o Each iteration uses values from the previous iteration.
o Suitable for parallel computation since updates do not
depend on each other within the same iteration.
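A minimal Python sketch of the Jacobi iteration, assuming a diagonally dominant system; the function name, tolerance, and sample system are illustrative choices, not part of the original notes:

import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=500):
    x = x0.astype(float).copy()
    D = np.diag(A)                       # vector of diagonal entries a_ii
    R = A - np.diagflat(D)               # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D          # every component uses only the previous iterate
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant coefficient matrix
b = np.array([9.0, 12.0])
print(jacobi(A, b, np.zeros(2)))         # approx [1.833, 1.667]

Because x_new is computed from the old x alone, all components can be updated in parallel, which is the property the notes highlight.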

Gauss-Seidel Iterative Method


1. Description:
o This method updates each variable sequentially during
an iteration and immediately uses the latest computed
values.
2. Key Characteristics:
o Each iteration uses the most recent values computed
during the same iteration.
o Sequential updates make it less parallelizable
compared to Jacobi.
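For contrast, a sketch of Gauss-Seidel under the same illustrative assumptions; note the in-place update of x, which is what forces the sequential sweep:

import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=500):
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds values computed in THIS iteration
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x
    return x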
Which Method is Better?
The choice between Jacobi and Gauss-Seidel depends on the
situation:
1. When to Use Jacobi:
o Parallel computing environments, where simultaneous
updates for all variables are possible.
o Systems where simplicity in implementation is more
critical than speed.
2. When to Use Gauss-Seidel:
o Sequential processing environments, where
convergence speed matters more.
o Systems that benefit from faster convergence and
where the matrix is well-suited (e.g., diagonally
dominant).
3. General Observations:
o Gauss-Seidel typically outperforms Jacobi in terms of
convergence speed, especially for diagonally dominant
matrices.
o If the coefficient matrix is sparse or large-scale, Jacobi's
parallelizability might make it preferable.
o For well-conditioned systems, Gauss-Seidel may
converge in fewer iterations.
Non-Linear Equation Solving Method:

Bracketing Method:
The bracketing method is a class of numerical methods for finding
the root of a function f(x), where f(x)=0. It works by starting with
two initial guesses, a and b, such that f(a) and f(b) have opposite
signs (f(a)⋅f(b)<0), ensuring that there is at least one root in the
interval [a, b]. The interval is iteratively reduced until it converges
to the root.

Bisection Method:
(binary chopping or half interval method)
1. Description:
• A simple bracketing method that divides the interval [a, b]
into two equal parts.
• The midpoint c is calculated as: c = (a + b)/2
• Depending on the sign of f(c), the root lies in either [a, c] or [c, b], and the interval is updated accordingly.
• Iteration continues until the interval width or |f(c)| is less than a specified tolerance.
2. Key Characteristics:
• Always converges to a root, provided f(a)⋅f(b)<0.
• Convergence is guaranteed but relatively slow, with an order
of convergence of 1 (linear).
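A minimal Python sketch of the bisection loop described above; the tolerance and test function are illustrative:

def bisection(f, a, b, tol=1e-8, max_iter=200):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    for _ in range(max_iter):
        c = (a + b) / 2.0                # midpoint of the current interval
        fc = f(c)
        if abs(fc) < tol or (b - a) / 2.0 < tol:
            return c
        if fa * fc < 0:                  # root lies in [a, c]
            b, fb = c, fc
        else:                            # root lies in [c, b]
            a, fa = c, fc
    return (a + b) / 2.0

print(bisection(lambda x: x**2 - 2.0, 0.0, 2.0))   # approx sqrt(2)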
False Position (Regula Falsi) Method:
1. Description:
• Similar to the bisection method but uses a weighted
average to choose the next point based on the function
values.
• The new point c is calculated as:
c = (a·f(b) − b·f(a)) / (f(b) − f(a))
• Depending on the sign of f(c), the interval is updated to [a, c] or [c, b].
2. Key Characteristics:
• Converges faster than the bisection method in many
cases.
• The interval is not halved but adjusted dynamically
based on the function's behavior.
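A sketch of the false position update, which differs from bisection only in how c is chosen (names and tolerance are illustrative):

def false_position(f, a, b, tol=1e-8, max_iter=200):
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    for _ in range(max_iter):
        # c is where the secant through (a, f(a)) and (b, f(b)) crosses zero
        c = (a * fb - b * fa) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c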

Which Method is Better?


1. When to Use Bisection Method:
o When reliability is more important than speed.
o For functions that are poorly behaved or have flat
regions near the root.
o When a guaranteed convergence is essential.
2. When to Use False Position Method:
o When speed is a priority, and the function has a steep
slope near the root.
o For well-behaved functions where the endpoints a and b
are not near zero or flat regions.
o When fewer iterations are desirable.
3. General Observations:
o Bisection is more robust but slower and less efficient.
o False Position is faster but can stagnate near flat
regions.

Opening Method:
An opening method (more commonly called an open method) is a class of numerical techniques for finding the root of a function f(x), where f(x)=0. Unlike bracketing methods, which require an initial interval enclosing the root, open methods start with one or two initial guesses and do not require f(a)⋅f(b)<0. These methods iterate on successive approximations to reduce the error, often converging faster than bracketing methods but with less reliability.

Secant Method:
1. Description:
• The secant method uses two initial guesses, x1 and x2,
and approximates the function f(x) by a secant line (a
straight line passing through two points on the function
curve).
• The formula for the next approximation:
x3 = x2 − f(x2)·(x2 − x1) / (f(x2) − f(x1))
• This method replaces the derivative required in Newton-Raphson with a finite-difference approximation.
2. Key Characteristics:
• Requires two initial guesses.
• Faster convergence than bracketing methods but
slower than Newton-Raphson in general.
• Order of convergence: approximately 1.618 (superlinear).
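A minimal Python sketch of the secant iteration using the formula above; the guesses and tolerance are illustrative:

def secant(f, x1, x2, tol=1e-10, max_iter=100):
    for _ in range(max_iter):
        f1, f2 = f(x1), f(x2)
        if f2 - f1 == 0.0:               # secant line is horizontal; no update possible
            break
        x3 = x2 - f2 * (x2 - x1) / (f2 - f1)
        if abs(x3 - x2) < tol:
            return x3
        x1, x2 = x2, x3
    return x2

print(secant(lambda x: x**2 - 2.0, 1.0, 2.0))   # approx sqrt(2)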

Newton-Raphson Method:
1. Description:
• The Newton-Raphson method uses one initial guess x0
and iteratively approximates the root by using the
tangent line of the function at each point.
• The formula for the next approximation:
x_{n+1} = x_n − f(x_n) / f′(x_n)
• Relies on the derivative f′(x) of the function.


2. Key Characteristics:
• Requires only one initial guess.
• Very fast convergence when f′(x) exists and x0 is close
to the root.
• Order of convergence: 2 (quadratic), meaning the error
decreases very quickly.
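A sketch of Newton-Raphson; the caller supplies the derivative df, and the test function is illustrative:

def newton_raphson(f, df, x0, tol=1e-10, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0.0:                   # flat tangent: the method fails here
            break
        x_new = x - fx / dfx             # intersection of the tangent with the x-axis
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(newton_raphson(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.0))   # approx sqrt(2)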

Which Method is Better?


1. When to Use Secant Method:
• When derivatives are difficult or expensive to compute.
• When a derivative is not defined or discontinuous.
• When reliability is more critical than speed.
• Suitable for nonlinear functions with unknown or
complicated derivatives.
2. When to Use Newton-Raphson Method:
• When derivatives are easy to compute and well-
behaved.
• When a good initial guess is available, and rapid
convergence is desired.
• For applications requiring high precision in fewer
iterations.
3. General Observations:
• Newton-Raphson is faster but requires f′(x) and may fail
if the initial guess is far from the root or if f′(x) is zero
near the root.
• Secant is slower but more robust in cases where
derivatives are problematic.
Diagonally Dominant:
A square matrix A = [aij] of size n×n is diagonally dominant if, for each row i (where i = 1, 2, …, n), the absolute value of the diagonal element |aii| is greater than or equal to the sum of the absolute values of all the other elements in that row:
|aii| ≥ Σ(j≠i) |aij| for every row i
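A small Python sketch of this row-wise (weak) dominance test; the function name is illustrative:

import numpy as np

def is_diagonally_dominant(A):
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)                        # |a_ii|
    off = A.sum(axis=1) - diag               # sum of |a_ij|, j != i, per row
    return bool(np.all(diag >= off))

print(is_diagonally_dominant([[4, 1], [2, 5]]))   # True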

Measurement of Errors:
Numerical error is commonly measured as the absolute error, |true value − approximate value|, or as the relative error, the absolute error divided by |true value|.
Gauss Elimination Method:
Gaussian elimination is a method for solving a system of linear
equations, finding the rank of a matrix, or computing the inverse
of a matrix. It transforms a given matrix into an upper triangular
form using row operations.
Gauss-Jordan Elimination:
Gauss-Jordan elimination is an extension of Gaussian elimination
that transforms the matrix into a reduced row echelon form
(RREF), where the matrix becomes diagonal with all diagonal
elements equal to 1.

• Gaussian Elimination puts a matrix in row echelon form, while Gauss-Jordan Elimination puts a matrix in reduced row echelon form. For small systems (or by hand), it is usually more convenient to use Gauss-Jordan elimination and read off each variable directly from the matrix system. However, Gaussian elimination by itself is often computationally more efficient on a computer. Also, Gaussian elimination is all you need to determine the rank of a matrix (an important property of each matrix); going to the trouble of putting a matrix in reduced row echelon form is not worth it if only the rank is needed.
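A sketch of Gaussian elimination with back substitution; partial pivoting is added here as a standard stability measure, though the notes do not mention it:

import numpy as np

def gaussian_elimination(A, b):
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))          # partial pivoting for stability
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]                    # elimination multiplier
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)                                  # back substitution
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

print(gaussian_elimination([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))   # approx [0.8, 1.4]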
LU Factorization:

LU decomposition of a matrix is the factorization of a given square matrix into two triangular matrices, one upper triangular matrix and one lower triangular matrix, such that the product of these two matrices gives the original matrix.

AX = B
Here, A = LU, so LUX = B.
Let UX = Y; then LY = B.
Y is found from LY = B by forward substitution, and X from UX = Y by backward substitution. Computing the factorization costs O(n^3); once A is factored, each new right-hand side B can be solved in O(n^2) time.
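A sketch of Doolittle LU factorization (no pivoting; assumes nonzero pivots) with the two substitution passes; all names are illustrative:

import numpy as np

def lu_decompose(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]                     # row k of U
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]   # column k of L
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                       # forward substitution: L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # backward substitution: U x = y
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
print(lu_solve(L, U, np.array([10.0, 12.0])))   # approx [1.0, 2.0]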
Ordinary Differential Equations:
An Ordinary Differential Equation (ODE) is a type of equation that contains one independent variable, one dependent variable, and at least one derivative of the dependent variable with respect to the independent variable.
Runge-Kutta (RK) Method:
The RK method is a family of iterative methods used to approximate the solutions of ODEs. The 4th-order Runge-Kutta method (RK4) is the most commonly used and provides a good balance between accuracy and computational efficiency.
[Figure: graph of dy/dx = 2x + 1]
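A minimal RK4 sketch in Python, applied to the notes' example dy/dx = 2x + 1; the step size and initial condition are illustrative:

def rk4(f, x0, y0, h, steps):
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)                         # slope at the start of the step
        k2 = f(x + h / 2, y + h * k1 / 2)    # slope at the midpoint, using k1
        k3 = f(x + h / 2, y + h * k2 / 2)    # slope at the midpoint, using k2
        k4 = f(x + h, y + h * k3)            # slope at the end of the step
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y

# dy/dx = 2x + 1 with y(0) = 0; the exact solution is y = x^2 + x
print(rk4(lambda x, y: 2 * x + 1, 0.0, 0.0, 0.1, 10))   # approx 2.0 at x = 1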
Newton Forward and Backward Interpolation:
Interpolation is the technique of estimating the value of a function
for any intermediate value of the independent variable, while the
process of computing the value of the function outside the given
range is called extrapolation.

Newton's forward difference formula:
f(x) ≈ f(x0) + p·Δf(x0) + (p(p−1)/2!)·Δ²f(x0) + (p(p−1)(p−2)/3!)·Δ³f(x0) + …, where p = (x − x0)/h
This formula is particularly useful for interpolating the values of f(x) near the beginning of the set of values given. h is called the interval of difference, and x0 is the first value given.

Newton's backward difference formula:
f(x) ≈ f(xn) + p·∇f(xn) + (p(p+1)/2!)·∇²f(xn) + …, where p = (x − xn)/h
This formula is useful when the value of f(x) is required near the end of the table.

Time Complexity: O(n^2) since there are two nested loops to fill
the forward difference table and an additional loop to calculate
the interpolated value.
Space Complexity: O(n^2), as the forward and backward difference
table is stored in a two-dimensional array with n rows and n
columns.
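A sketch of Newton forward interpolation matching the complexity analysis above (the difference table is the O(n^2) part); the sample points are illustrative:

def newton_forward(xs, ys, x):
    n = len(xs)
    h = xs[1] - xs[0]                        # interval of difference
    diff = [list(ys)]                        # forward difference table, O(n^2)
    for k in range(1, n):
        prev = diff[-1]
        diff.append([prev[i + 1] - prev[i] for i in range(n - k)])
    p = (x - xs[0]) / h
    result, term = ys[0], 1.0
    for k in range(1, n):
        term *= (p - (k - 1)) / k            # accumulates p(p-1)...(p-k+1)/k!
        result += term * diff[k][0]
    return result

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 10.0]                   # samples of y = x^2 + 1
print(newton_forward(xs, ys, 1.5))           # 3.25, matching x^2 + 1 at x = 1.5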
Lagrange Interpolation:
Lagrange Interpolation Formula finds a polynomial called
Lagrange Polynomial that takes on certain values at an arbitrary
point. It is an nth-degree polynomial expression of the function
f(x). The interpolation method is used to find the new data points
within the range of a discrete set of known data points.
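The Lagrange polynomial is P(x) = Σ yi · Π(j≠i) (x − xj)/(xi − xj). A minimal Python sketch with illustrative, unevenly spaced sample points:

def lagrange(xs, ys, x):
    total = 0.0
    n = len(xs)
    for i in range(n):
        # Basis term L_i(x): equals 1 at xs[i] and 0 at every other node
        term = ys[i]
        for j in range(n):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

print(lagrange([1.0, 2.0, 4.0], [1.0, 4.0, 16.0], 3.0))   # 9.0, since the data lie on y = x^2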

Properties of Lagrange Interpolation:


• This formula is used to find the value of the function at any
point even when the function itself is not given.
• It is used even if the points given are not evenly spaced.
• It gives the value of the dependent variable for any independent variable belonging to any function, and is thus used in numerical analysis for finding the values of the function.
Uses of Lagrange Interpolation:
• It is used to find the value of the dependent variable at any
particular independent variable even if the function itself is
not given.
• It is used in image scaling.
• It is used in AI modeling.
• It is used in training natural language processing (NLP) models, etc.

Numerical Integration:
Numerical integration is a mathematical technique used to
approximate the definite integral of a function over a specified
interval. The definite integral of a function represents the area
under its graph, and it provides information about the total change
in the dependent variable over a given interval. The importance of
numerical integration lies in its ability to estimate the values of
definite integrals for functions that cannot be expressed in closed-
form. This makes it a powerful tool for solving real-world problems
that involve continuous change, such as the calculation of area,
volume, work, and the analysis of physical systems.
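The notes do not single out a particular rule; as one common example, a sketch of the composite trapezoidal rule, which approximates the area under the curve by summing trapezoids:

import math

def trapezoidal(f, a, b, n):
    h = (b - a) / n                          # width of each subinterval
    total = 0.5 * (f(a) + f(b))              # endpoints carry half weight
    for i in range(1, n):
        total += f(a + i * h)                # interior points carry full weight
    return h * total

print(trapezoidal(math.sin, 0.0, math.pi, 100))   # approx 2.0, the exact area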
Curve Fitting:
Curve fitting is the process of constructing a curve (or a
mathematical function) that best fits a set of data points. It is
widely used in data analysis, engineering, and science to model
relationships, predict outcomes, and interpret patterns.
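The notes do not name a specific fitting technique; as one common example, a least-squares straight-line fit using numpy.polyfit (the data points are made up for illustration):

import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.1, 2.9, 5.2, 6.8, 9.1])             # roughly linear data
slope, intercept = np.polyfit(xs, ys, deg=1)         # least-squares line fit
print(slope, intercept)                              # approx 2.0 and 1.0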
