BCA S2 MATHS U3

The document covers various methods in applied mathematics for solving linear simultaneous equations, including Gaussian elimination, LU decomposition, and iterative methods like Gauss-Seidel and Gauss-Jacobi. It also discusses the Gauss-Jordan method for finding matrix inverses, as well as numerical differentiation and integration techniques such as the Trapezoidal Rule and Simpson's 1/3 Rule. Each method is explained with formulations, examples, and comparisons of accuracy and efficiency.


APPLIED

MATHEMATICS
SEMESTER 2
UNIT 3

HI COLLEGE
SYLLABUS

HiCollege Click Here For More Notes 01


GAUSSIAN ELIMINATION METHOD WITH AND
WITHOUT ROW INTERCHANGE
Linear simultaneous equations are a set of two or more linear equations that
contain two or more variables. These equations are said to be simultaneous
because they are all true at the same time. For example:
2x + 3y = 7
x - 2y = -3
Here, we have two equations with two variables, x and y. The goal is to find the
values of x and y that satisfy both equations.
Gaussian Elimination Method
Gaussian Elimination is a method to solve linear simultaneous equations. It's a
systematic process that transforms the given matrix into upper triangular form,
making it easy to solve for the variables.
Step 1: Write the augmented matrix
Write the coefficients of the variables and the constants in a matrix form. The
matrix is called an augmented matrix.
| 2  3 |  7 |
| 1 -2 | -3 |
Step 2: Perform Gaussian Elimination
We'll perform row operations to transform the matrix into upper triangular form.
The goal is to make all the elements below the diagonal zero.
Step 2.1: Swap rows (partial pivoting)
If necessary, we swap rows so that the entry with the largest absolute value in the current column becomes the pivot. This avoids dividing by zero and reduces rounding error.
For example, if we have:
| -1  2 | 3 |
|  2 -3 | 4 |
We can swap rows to get:
|  2 -3 | 4 |
| -1  2 | 3 |
so that the pivot 2 has the largest absolute value in its column.
Step 2.2: Eliminate elements below the diagonal
Now, we eliminate the elements below the diagonal by subtracting suitable multiples of one row from another.
For example, in our initial matrix:
| 2  3 |  7 |
| 1 -2 | -3 |
applying R2 <- R2 - (1/2)R1 gives the upper triangular form:
| 2    3  |   7   |
| 0  -7/2 | -13/2 |



Step 3: Solve for variables
Now that we have an upper triangular matrix, we can solve for the variables by
back-substitution.
Example: Solving for x and y
From our upper triangular matrix:
| 2    3  |   7   |
| 0  -7/2 | -13/2 |
We can solve for x and y by back-substitution. Row 2 gives (-7/2)y = -13/2, so:
y = 13/7
Substituting into row 1: 2x + 3(13/7) = 7, so 2x = 10/7, giving:
x = 5/7
Voilà! We have solved our linear simultaneous equations using the Gaussian Elimination Method with row interchange.
Without Row Interchange
When every pivot on the diagonal turns out to be nonzero, Gaussian Elimination can be carried out without swapping rows. This version is slightly simpler, but it fails if a zero pivot appears and can lose accuracy when a pivot is very small.
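The full procedure (row interchange, elimination, back-substitution) can be sketched in Python. This is a minimal illustration for small dense systems, and the function name gauss_solve is my own choice, not from the notes:

```python
def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting, then back-substitution."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix [A | b]
    for col in range(n):
        # Row interchange: bring the largest-magnitude pivot into position.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # Eliminate entries below the diagonal in this column.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [v - f * pv for v, pv in zip(M[r], M[col])]
    # Back-substitution on the upper triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

print(gauss_solve([[2.0, 3.0], [1.0, -2.0]], [7.0, -3.0]))
# approaches [5/7, 13/7], the solution found above
```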



LU DECOMPOSITION

LU Decomposition, also known as LU Factorization, is a method to decompose a matrix into the product of two matrices: a lower triangular matrix (L) and an upper triangular matrix (U). This decomposition is useful for solving systems of linear equations and other applications.
Formulation
Given a square matrix A, we can write it as:
A=L×U
where L is a lower triangular matrix and U is an upper triangular matrix. In the common Doolittle convention, the elements of L and U are determined by the following conditions:
1. L is a lower triangular matrix with ones on the diagonal.
2. U is an upper triangular matrix (its diagonal entries are the pivots, not necessarily ones).
3. The product of L and U is equal to the original matrix A.
Example
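As an example, here is a small Python sketch of the Doolittle scheme (the function name lu_decompose and the sample matrix are my own choices; it assumes no pivoting is needed):

```python
def lu_decompose(A):
    """Doolittle LU decomposition: A = L * U, with ones on L's diagonal.
    Assumes A is square and every pivot is nonzero (no row swaps)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Fill row i of U using already-computed rows of L and U.
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        # Fill column i of L below the diagonal.
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
# L = [[1, 0], [1.5, 1]], U = [[4, 3], [0, -1.5]]; check: L * U = A
```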



GAUSS - JACOBI AND GAUSS-SEIDEL
METHOD
Gauss-Seidel Method
The Gauss-Seidel method is a numerical method for solving systems of linear equations.
It's an iterative method that uses the values of previously computed elements to update
the current iteration.
Formulation
Given a system of linear equations:
Ax = b
where A is a matrix, x is the solution vector, and b is the constant vector, the Gauss-
Seidel method iteratively updates the solution vector as follows:
1. Initialize x(0) = 0 (or any starting guess)
2. For each iteration k, update the components one at a time:
x_i(k+1) = (b_i - Σ(j<i) a_ij * x_j(k+1) - Σ(j>i) a_ij * x_j(k)) / a_ii
where a_ii is the diagonal entry of A. Each update immediately reuses the components already computed in the current sweep.
Gauss-Jacobi Method
The Gauss-Jacobi method is similar to the Gauss-Seidel method, but it uses only the previous iteration's values to update the current iteration. The Gauss-Jacobi method usually converges more slowly than the Gauss-Seidel method, but every component can be updated independently, which makes it simpler to implement.
Formulation
Given a system of linear equations:
Ax = b
where A is a matrix, x is the solution vector, and b is the constant vector, the Gauss-
Jacobi method iteratively updates the solution vector as follows:
1. Initialize x(0) = 0 (or any starting guess)
2. For each iteration k, compute every component from the previous iterate only:
x_i(k+1) = (b_i - Σ(j≠i) a_ij * x_j(k)) / a_ii
Key differences between Gauss-Seidel and Gauss-Jacobi methods
1. Use of previous iteration values: the Gauss-Seidel method mixes current-sweep and previous-sweep values, while the Gauss-Jacobi method uses only previous-sweep values.
2. Convergence speed: when both methods converge, Gauss-Seidel typically needs fewer iterations to reach a given accuracy because it reuses freshly updated values.
3. Convergence guarantee: both methods are guaranteed to converge when A is strictly diagonally dominant; neither is guaranteed to converge for an arbitrary matrix.



Suppose we have a system of linear equations:

4x + y = 6
x + 3y = 7

whose exact solution is x = 1, y = 2. The coefficient matrix is diagonally dominant, so both methods converge. Both start from x(0) = [0, 0].

Using the Gauss-Jacobi method:

x(1) = [6/4, 7/3] = [1.5, 2.3333]
x(2) = [(6 - 2.3333)/4, (7 - 1.5)/3] = [0.9167, 1.8333]

Using the Gauss-Seidel method:

x(1): x = 6/4 = 1.5, then y = (7 - 1.5)/3 = 1.8333
x(2): x = (6 - 1.8333)/4 = 1.0417, then y = (7 - 1.0417)/3 = 1.9861

Both sequences approach the same solution [1, 2], but Gauss-Seidel gets close faster because each sweep reuses the components it has just updated.
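The two iterations can be sketched in Python. This is a minimal illustration with a fixed iteration count; the function names and sample system are my own choices:

```python
def jacobi(A, b, iters=25):
    """Gauss-Jacobi: each sweep uses only the previous iterate."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def gauss_seidel(A, b, iters=25):
    """Gauss-Seidel: each update reuses components from the current sweep."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
print(jacobi(A, b))        # approaches [1.0, 2.0]
print(gauss_seidel(A, b))  # approaches [1.0, 2.0], in fewer sweeps
```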



GAUSS - JORDAN METHOD AND TO FIND
INVERSE OF A MATRIX BY THIS METHOD
The Gauss-Jordan method is a numerical method used to solve systems of linear
equations and to find the inverse of a matrix. It is an extension of the Gaussian
elimination method, which is used to solve systems of linear equations.

Formulation

Given a system of linear equations:

Ax = b

where A is a matrix, x is the solution vector, and b is the constant vector, the
Gauss-Jordan method performs the following steps:

Write the augmented matrix [A | b].


Use row operations to transform the augmented matrix into reduced row echelon
form.
If the reduced row echelon form is:
| 1 0 0 | x1 |
| 0 1 0 | x2 |
| 0 0 1 | x3 |

then the system has a unique solution, and x = [x1, x2, x3].

Finding Inverse of a Matrix using Gauss-Jordan Method

To find the inverse of a matrix A using the Gauss-Jordan method, we can perform
the following steps:



Write the augmented matrix [A | I], where I is the identity matrix.
Use row operations to transform the augmented matrix into reduced row echelon form.
If the reduced row echelon form is [I | B]:
| 1 0 0 | b11 b12 b13 |
| 0 1 0 | b21 b22 b23 |
| 0 0 1 | b31 b32 b33 |
then the inverse of A is the right-hand block:

A^(-1) = | b11 b12 b13 |
         | b21 b22 b23 |
         | b31 b32 b33 |

If A cannot be reduced to the identity, then A is singular and has no inverse.
Example

Suppose we want to find the inverse of the matrix:

A=|23|
|45|

Using the Gauss-Jordan method, we perform the following steps:

Write the augmented matrix [A | I]:

| 2 3 | 1 0 |
| 4 5 | 0 1 |

Perform row operations to reach reduced row echelon form:

R2 <- R2 - 2R1:
| 2  3 |  1 0 |
| 0 -1 | -2 1 |

R1 <- R1/2 and R2 <- -R2:
| 1 3/2 | 1/2  0 |
| 0  1  |  2  -1 |

R1 <- R1 - (3/2)R2:
| 1 0 | -5/2  3/2 |
| 0 1 |   2   -1  |

The inverse of A is:

A^(-1) = | -5/2  3/2 |
         |   2   -1  |

(Check: multiplying A by this matrix gives the identity.)
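The same reduction can be automated. Below is a Python sketch of Gauss-Jordan inversion with partial pivoting; the function name gj_inverse and the singularity tolerance are my own choices:

```python
def gj_inverse(A):
    """Gauss-Jordan inversion: reduce [A | I] to [I | A^-1].
    Uses partial pivoting; raises ValueError if A is singular."""
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[col], M[piv] = M[piv], M[col]
        # Scale the pivot row so the pivot becomes 1.
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * pv for v, pv in zip(M[r], M[col])]
    return [row[n:] for row in M]

print(gj_inverse([[2.0, 3.0], [4.0, 5.0]]))
# [[-2.5, 1.5], [2.0, -1.0]], matching the hand computation above
```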
NUMERICAL DIFFERENTIATION

Numerical differentiation is a technique used to approximate the derivative of a function at a given point. There are several methods for numerical differentiation, including:

1. Forward Difference Formula: approximates the derivative at a point using the values of the function at that point and at points ahead of it.
2. Central Difference Formula: approximates the derivative at a point using the values of the function at points on either side of it.
3. Backward Difference Formula: approximates the derivative at a point using the values of the function at that point and at points behind it.

First Order Derivatives

The first order derivative of a function f(x) at a point x=a is denoted as f'(a) and
represents the rate of change of the function at that point. The first order
derivative can be approximated using the following formulas:
1. Forward Difference Formula: f'(a) ≈ (f(a+h) - f(a)) / h
where h is a small positive step size.
2. Central Difference Formula: f'(a) ≈ (f(a+h) - f(a-h)) / (2h)
3. Backward Difference Formula: f'(a) ≈ (f(a) - f(a-h)) / h

Second Order Derivatives

The second order derivative of a function f(x) at a point x=a is denoted as f''(a) and
represents the rate of change of the rate of change of the function at that point.
The second order derivative can be approximated using the following formulas:

1. Forward Difference Formula: f''(a) ≈ (f(a+2h) - 2f(a+h) + f(a)) / h^2
2. Central Difference Formula: f''(a) ≈ (f(a+h) - 2f(a) + f(a-h)) / h^2
3. Backward Difference Formula: f''(a) ≈ (f(a) - 2f(a-h) + f(a-2h)) / h^2
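These formulas are easy to try numerically. A small Python sketch (the function names and step sizes are my own choices for illustration):

```python
import math

def dfdx_central(f, a, h=1e-5):
    """Central difference for f'(a): (f(a+h) - f(a-h)) / (2h), O(h^2) error."""
    return (f(a + h) - f(a - h)) / (2 * h)

def d2fdx2_central(f, a, h=1e-4):
    """Central difference for f''(a): (f(a+h) - 2f(a) + f(a-h)) / h^2."""
    return (f(a + h) - 2 * f(a) + f(a - h)) / h**2

# Derivatives of sin at 0 should be cos(0) = 1 and -sin(0) = 0.
print(dfdx_central(math.sin, 0.0))    # close to 1.0
print(d2fdx2_central(math.sin, 0.0))  # close to 0.0
```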



TABULAR POINTS
A tabular point is a point where the function values are known exactly. In this
case, we can use the exact values of the function to calculate the derivative.
For example, if we know the values of the function f(x) at x=a, x=a+h, and x=a+2h, we can use the central difference formula at the middle point to calculate the first order derivative:
f'(a+h) ≈ (f(a+2h) - f(a)) / (2h)

NON-TABULAR POINTS
A non-tabular point is a point where the function values are not known exactly. In
this case, we can use interpolation or extrapolation techniques to estimate the
function values.
For example, if we know the values of the function f(x) at x=a and x=a+h, we can use linear extrapolation to estimate the value of the function at x=a+2h:
f(a+2h) ≈ 2f(a+h) - f(a)
We can then use this estimated value to calculate the first order derivative:
f'(a) ≈ (f(a+h) - f(a)) / h

NUMERICAL INTEGRATION
Numerical integration is a method used to approximate the value of a definite
integral. There are several methods for numerical integration, including:

Trapezoidal Rule: This method approximates the value of a definite integral by dividing the area under the curve into trapezoids and summing the areas of the trapezoids.

Simpson's 1/3 Rule: This method approximates the value of a definite integral
by dividing the area under the curve into parabolic segments and summing
the areas of the segments.



TRAPEZOIDAL RULE
The Trapezoidal Rule is given by:
∫[a,b] f(x) dx ≈ (h/2) * (f(a) + f(b) + 2 * Σ f(x_i))
where h = (b-a)/n, x_i = a + i*h for i = 1, 2, ..., n-1, and n is the number of subintervals.
Error in Trapezoidal Rule
The error in the Trapezoidal Rule is O(h^2), where h is the width of each
subinterval.

SIMPSON'S 1/3 RULE


Simpson's 1/3 Rule is given by:
∫[a,b] f(x) dx ≈ (h/3) * (f(a) + f(b) + 4 * Σ f(x_i) + 2 * Σ f(x_j))
where h = (b-a)/n, the x_i are the odd-indexed interior points (i = 1, 3, ..., n-1), the x_j are the even-indexed interior points (j = 2, 4, ..., n-2), and n is an even number of subintervals.
Error in Simpson's 1/3 Rule
The error in Simpson's 1/3 Rule is O(h^4), where h is the width of each subinterval.
COMPARISON
The Trapezoidal Rule is simpler to implement than Simpson's 1/3 Rule, but it has a larger error for the same number of subintervals. Simpson's 1/3 Rule is more accurate but requires an even number of subintervals. The choice between the two methods depends on the specific problem and the desired level of accuracy.
Example
Suppose we want to approximate the value of the integral:
∫[0,2] x^2 dx
whose exact value is 8/3 ≈ 2.66667.
We can use the Trapezoidal Rule with n = 4 subintervals (h = 0.5):
x_0 = 0, x_1 = 0.5, x_2 = 1, x_3 = 1.5, x_4 = 2
f(x_0) = 0, f(x_1) = 0.25, f(x_2) = 1, f(x_3) = 2.25, f(x_4) = 4
Using the Trapezoidal Rule, we get:
∫[0,2] x^2 dx ≈ (0.5/2) * (0 + 4 + 2 * (0.25 + 1 + 2.25)) = 2.75

Using Simpson's 1/3 Rule with the same points, the odd-indexed points x_1 and x_3 get weight 4 and the even interior point x_2 gets weight 2:
∫[0,2] x^2 dx ≈ (0.5/3) * (0 + 4 + 4 * (0.25 + 2.25) + 2 * 1) = 16/6 ≈ 2.66667

The Trapezoidal Rule overestimates by about 0.08333, while Simpson's 1/3 Rule is exact here: Simpson's rule integrates polynomials of degree three or less without error.
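Both rules can be implemented in a few lines. A Python sketch (the function names are my own choices):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals of width h = (b-a)/n."""
    h = (b - a) / n
    s = f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n))
    return h / 2 * s

def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd interior points
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior points
    return h / 3 * s

f = lambda x: x**2
print(trapezoid(f, 0, 2, 4))  # 2.75, as in the worked example
print(simpson(f, 0, 2, 4))    # close to 8/3 ≈ 2.66667 (exact for x^2)
```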
