Computational Methods Chapter 3

Chapter 3: Systems of Linear Equations, Eigenvalues and Eigenvectors

Mulat Tigabu

Addis Abeba Science and Technology University


College of Engineering
Department of Electromechanical Engineering
Addis Abeba, Ethiopia
[email protected]

November 10, 2024

Mulat Tigabu Computational Methods November 10, 2024 1 / 55


Goals and Objectives
Be familiar with terminology: forward elimination, back substitution,
pivot equation, and pivot coefficient.
Understand the problems of division by zero, round-off error, and
ill-conditioning.
Know how to compute the determinant using Gauss elimination.
Understand the advantages of pivoting; realize the difference between
partial and complete pivoting.
Know the fundamental difference between Gauss elimination and the
Gauss-Jordan method and which is more efficient.
Recognize how Gauss elimination can be formulated as an LU
decomposition.
Know how to incorporate pivoting and matrix inversion into an LU
decomposition algorithm.
Know how to interpret the elements of the matrix inverse in evaluating
stimulus-response computations in engineering.
Realize how to use the inverse and matrix norms to evaluate system
condition.
Introduction: Linear Equations
Linear systems of equations are associated with many problems in
engineering and science, as well as with applications of mathematics
to the social sciences and quantitative study of business and economic
problems.
Linear algebraic equations occur in almost all branches of engineering.
Their most important application in engineering is in the analysis of
linear systems (any system whose response is proportional to the
input is deemed to be linear). Linear systems include structures,
elastic solids, heat flow, seepage of fluids, electromagnetic
fields and electric circuits.
A system of n linear algebraic equations in n unknowns has the form

a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
· · ·
an1 x1 + an2 x2 + · · · + ann xn = bn



Continued
where the coefficients aij and the constants bj are known and the xi represent the unknowns. In matrix notation, the equations are written as Ax = b.
A system of n linear equations in n unknowns has a unique solution provided that the coefficient matrix A is nonsingular, i.e., |A| ≠ 0.
The rows and columns of a nonsingular matrix are linearly independent
in the sense that no row (or column) is a linear combination of the
other rows (or columns).
Continued

If the coefficient matrix is singular, the equations may have an
infinite number of solutions, or no solution at all, depending on the
constant vector.
Summarizing: the modeling of linear systems invariably gives rise to
equations of the form Ax = b, where b is the input and x represents
the response of the system. The coefficient matrix A, which reflects
the characteristics of the system, is independent of the input. In other
words, if the input is changed, the equations have to be solved again
with a different b, but the same A.
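This input-response view can be sketched with NumPy (the 3×3 matrix and the two input vectors below are illustrative, not taken from the slides):

```python
import numpy as np

# A reflects the system; b is the input; x is the response.
A = np.array([[4.0, -2.0, 1.0],
              [3.0,  6.0, -4.0],
              [2.0,  1.0,  8.0]])

b1 = np.array([1.0, 2.0, 3.0])
b2 = np.array([0.0, 1.0, 0.0])   # a different input, same system A

x1 = np.linalg.solve(A, b1)      # response to b1
x2 = np.linalg.solve(A, b2)      # response to b2

# Each response satisfies A x = b to floating-point accuracy.
print(np.allclose(A @ x1, b1), np.allclose(A @ x2, b2))
```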



Methods of Solution

There are two classes of methods for solving systems of linear algebraic
equations: direct and iterative methods.
The common characteristic of direct methods is that they transform
the original equations into equivalent equations (equations that have
the same solution) that can be solved more easily.
The transformation is carried out by applying certain operations. The
solution contains no truncation error, but round-off error is
introduced by the floating-point operations.
Iterative (or indirect) methods start with a guess of the solution x
and then repeatedly refine it until a convergence criterion is reached.
Iterative methods are generally less efficient than direct methods
because of the large number of operations or iterations required.



Direct Methods:
Matrix Inverse Method
Gauss Elimination Method
Gauss-Jordan Method
Cholesky’s Triangularization Method
Crout’s Method
Thomas Algorithm for Tridiagonal System
Indirect or Iterative Methods:
Jacobi’s Iteration Method
Gauss-Seidel Iteration Method



The Inverse of a Matrix
If A and B are n × n matrices such that
AB = BA = I (1)
then B is said to be the inverse of A and is denoted by B = A−1.
The inverse can then be found from the adjoint:
A · adj(A) = |A| · I =⇒ A−1 = adj(A) / det(A) (2)
If det A is equal to zero, the elements of A−1 approach infinity (or are
indeterminate at best), in which case the inverse A−1 is said not to exist,
and the matrix A is said to be singular. The inverse of a matrix exists only
if its determinant is not zero, that is, the matrix must be nonsingular.
The requirements for obtaining a unique inverse of a matrix are:
The matrix is a square matrix.
The determinant of the matrix is not zero (the matrix is nonsingular).
Matrix Inversion Method
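The worked examples for this method are not reproduced in this text version; a minimal NumPy sketch of the matrix inversion method (the 3×3 system below is illustrative, not from the slides) is:

```python
import numpy as np

# Matrix inverse method: check A is nonsingular, then x = A^(-1) b.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
b = np.array([4.0, 5.0, 6.0])

detA = np.linalg.det(A)
assert abs(detA) > 1e-12          # inverse exists only if det(A) != 0

A_inv = np.linalg.inv(A)
x = A_inv @ b                     # solution x = [6, 15, -23]
```

In practice `np.linalg.solve(A, b)` is preferred over forming the inverse explicitly, since it is cheaper and more accurate; the inverse is formed here only to illustrate the method.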



Gauss Elimination Method
Gauss elimination is a popular technique for solving simultaneous
linear algebraic equations. Consider a system of linear simultaneous
equations Ax = b. The method reduces the coefficient matrix to an
upper triangular matrix through a sequence of operations carried out
on the matrix.
The vector b is also modified in the process. The solution vector x is
then obtained by a backward substitution procedure.
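The two phases (forward elimination, then back substitution) can be sketched in NumPy; the 3×3 system below is a common demonstration system, used here only for illustration:

```python
import numpy as np

def gauss_solve(A, b):
    """Naive Gauss elimination: forward elimination to upper-triangular
    form, then back substitution. Assumes no zero pivot is encountered."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: zero out the entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]        # multiplier for row i
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]             # b is modified along with A
    # Back substitution on the upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
x = gauss_solve(A, b)                    # x = [3, -2.5, 7]
```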



Example 3: Electrical Circuit



Gauss Elimination with Pivoting
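The worked example for this slide is not reproduced here; a minimal sketch of partial pivoting (row swaps only, chosen by largest pivot magnitude) added to the elimination loop, with an illustrative system whose first pivot is zero, is:

```python
import numpy as np

def gauss_solve_pivot(A, b):
    """Gauss elimination with partial pivoting: before eliminating
    column k, swap in the row with the largest |A[i, k]| for i >= k.
    This avoids division by zero and reduces round-off error."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # best pivot row in column k
        if p != k:                            # partial pivoting: swap rows only
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# Without pivoting, the first step would divide by the zero pivot A[0, 0].
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 1.0],
              [2.0, 1.0, 3.0]])
b = np.array([5.0, 6.0, 11.0])
x = gauss_solve_pivot(A, b)               # x = [3, 2, 1]
```

Complete pivoting would also search across columns (and track the resulting reordering of the unknowns); partial pivoting is usually sufficient in practice.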



Gauss-Jordan Method
The Gauss-Jordan method is an extension of the Gauss elimination
method.
The set of equations Ax = b is reduced to a diagonal set Ix = b′,
where I is the identity matrix and b′ is the transformed right-hand
side. This is equivalent to x = b′, so the solution vector is obtained
directly from the final right-hand side.
The Gauss-Jordan method applies the same kinds of row operations
as the Gauss elimination process.
The main difference is that it applies these operations above as well
as below the diagonal, so that all off-diagonal elements of the matrix
are reduced to zero.
The Gauss-Jordan method can also provide the inverse of the
coefficient matrix A along with the solution vector x.
The Gauss-Jordan method is widely used because of its stability and
direct procedure.



The Gauss-Jordan method requires more computational effort than the
Gauss elimination process.
The Gauss-Jordan method is a modification of the Gauss elimination
method, and the series of operations performed is quite similar.
In the Gauss elimination method an upper triangular matrix is derived,
while in the Gauss-Jordan method an identity matrix is derived.
Hence, back substitution is not required.
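The method can be sketched by augmenting A with the identity and with b, so one reduction yields both the inverse and the solution (the 2×2 system below is illustrative):

```python
import numpy as np

def gauss_jordan(A, b):
    """Gauss-Jordan on the augmented matrix [A | I | b]: reduce A to the
    identity; the middle block becomes A^(-1) and the last column x."""
    n = len(b)
    M = np.hstack([A.astype(float), np.eye(n),
                   b.reshape(-1, 1).astype(float)])
    for k in range(n):
        M[k] /= M[k, k]                 # normalize the pivot row
        for i in range(n):
            if i != k:                  # eliminate above AND below the pivot
                M[i] -= M[i, k] * M[k]
    return M[:, n:2*n], M[:, -1]        # (A^(-1), x); no back substitution

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
A_inv, x = gauss_jordan(A, b)           # x = [0.8, 1.4]
```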
LU Decomposition
It is possible to show that any square matrix A (possibly after row
interchanges) can be expressed as a product of a lower triangular
matrix L and an upper triangular matrix U.
The process of computing L and U for a given A is known as LU
decomposition or LU factorization. LU decomposition is not
unique (the combinations of L and U for a prescribed A are endless),
unless certain constraints are placed on L or U. These constraints
distinguish one type of decomposition from another.
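One such constraint is Doolittle's: require L to have a unit diagonal, which makes the factorization unique when it exists. A minimal sketch (no pivoting; the 2×2 matrix is illustrative):

```python
import numpy as np

def lu_doolittle(A):
    """Doolittle LU decomposition: L has a unit diagonal, which makes
    the factorization unique. No pivoting; assumes nonzero pivots."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # store the elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
L, U = lu_doolittle(A)
# L is unit lower triangular, U is upper triangular, and L @ U == A.
```

Once L and U are known, Ax = b is solved by forward substitution (Ly = b) followed by back substitution (Ux = y), which is why the factorization pays off when the same A is used with many different b vectors.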



Gauss-Seidel Iteration Method
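The worked slides for this method are not reproduced in this text version; a minimal sketch of the iteration, which solves the i-th equation for x[i] and uses updated components immediately (the diagonally dominant system below is illustrative):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=100):
    """Gauss-Seidel iteration: sweep through the equations, solving the
    i-th one for x[i] with the most recent values of the other unknowns.
    Diagonal dominance of A is a sufficient condition for convergence."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:   # convergence check
            break
    return x

# Diagonally dominant coefficient matrix, so the iteration converges.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
x = gauss_seidel(A, b)                                # x = [1, 1, 1]
```

Jacobi's method is the same sweep except that all components are updated from the previous iterate only; Gauss-Seidel typically converges faster because it reuses fresh values within a sweep.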



Eigenvalues and Eigenvectors

If there is a number λ ∈ R and an n-vector x ≠ 0 such that Ax = λx,
then λ is called an eigenvalue of A, and x is called an eigenvector
of A corresponding to λ. Note that eigenvalues are numbers while
eigenvectors are vectors.
The set of all eigenvectors of A for a given eigenvalue λ is called an
eigenspace, written Eλ(A).
Eigenvalues are a special set of scalars associated with a system of
linear equations, arising mostly in matrix equations. 'Eigen' is a
German word meaning 'proper' or 'characteristic'; an eigenvalue is
therefore also called a characteristic value, characteristic root,
proper value, or latent root. In simple terms, the eigenvalue is the
scalar by which the eigenvector is scaled. The basic equation is
Ax = λx
where A is a square matrix, x is the eigenvector, and λ is the eigenvalue.
Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors provide a way to understand how a
matrix behaves when it transforms a vector.
An eigenvector of a matrix is a vector that changes only in magnitude
(not direction) when the matrix is applied to it. The factor by which
it is scaled is called the eigenvalue.
Equivalently, applying the transformation A to the input vector x
produces an output λx that is parallel to x.
The trace of a matrix (the sum of its diagonal elements) equals the
sum of its eigenvalues.
The determinant equals the product of the eigenvalues.
These properties are useful for checking calculations and for
understanding a matrix's behavior, and they are invariant under a
change of basis.
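These definitions and properties can be verified numerically with NumPy (the 2×2 matrix is illustrative; its eigenvalues are 2 and 5):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors

# Each pair satisfies the defining equation A v = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)

# Trace = sum of eigenvalues; determinant = product of eigenvalues.
assert np.isclose(np.trace(A), eigvals.sum())
assert np.isclose(np.linalg.det(A), eigvals.prod())
```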



Why Are Eigenvalues and Eigenvectors Important?

Eigenvalues and eigenvectors are important because they provide
insight into the behavior of matrices, which are used to represent
systems of equations, transformations, second-order tensors, and
other phenomena.
Understanding the eigenstructure of a matrix helps to:
Determine stability in dynamic systems.
Simplify computations, such as finding powers of matrices and
diagonalization of matrices for solving systems of ordinary differential
equations.
Determine principal normal stresses and stretches for stress and strain
tensors in continuum mechanics.
Determine matrix invariants under change of basis transformation or
coordinate rotation.
Analyze data, for example, in Principal Component Analysis (PCA), for
dimensionality reduction.



