Algebra 2 Lecture Notes

The document provides definitions and examples of algebraic structures, including groups, rings, fields, and vector spaces. It explains the properties and axioms that define these structures, such as commutativity, associativity, identity elements, and inverse elements. Additionally, it discusses vector subspaces, linear combinations, and spans, illustrating these concepts with examples.


I-Vector spaces

I.1- Algebraic structures

Definition :
Let 𝐺 be a set. We call an internal composition law on 𝐺 any function from 𝐺 × 𝐺 to 𝐺.
An internal composition law on 𝐺 is often denoted by (⋆).

Example :
Addition (+) is an internal composition law on ℝ, represented as follows :
+: ℝ × ℝ → ℝ

(𝑥, 𝑦) → 𝑥 + 𝑦

Definition :
Let 𝐺 be a set and (⋆) be an operation on 𝐺. Then

- ⋆ is said to be commutative iff :

∀ x, y ∈ 𝐺, x ⋆ y = y ⋆ x
- ⋆ is said to be associative iff :

∀ x, y, z ∈ 𝐺, (x ⋆ y) ⋆ z = x ⋆ (y ⋆ z)
- ⋆ has an identity element (neutral element) if :

∃ e ∈ 𝐺, ∀ x ∈ 𝐺, x ⋆ e = e ⋆ x = x
- For each x ∈ 𝐺, an element x′ ∈ 𝐺 is called the symmetric or inverse of x iff :
x ⋆ x′ = x′ ⋆ x = e
where e ∈ 𝐺 is the identity element.

I.1.1- Groups :

Definition :
A group is a set 𝐺 combined with an internal composition law , denoted by ⋆ ,
such that :
- ⋆ has an identity element.
- Each element in 𝐺 has an inverse element in 𝐺 .
- ⋆ is associative.
If, in addition, ⋆ is commutative, then (𝐺, ⋆) is called a commutative or abelian
group.
Example :
1- (ℝ, +) is a commutative group because
a- ∀ x, y, z ∈ ℝ, x + (y + z) = (x + y) + z. (associativity)
b- The neutral element for (+) on ℝ is 0.
c- Each element 𝑥 of ℝ has a symmetric (−𝑥) for the internal law (+).
d- (+) is commutative on ℝ.
2- (ℝ, .) is not a group because 0 does not have an inverse element.
3- (ℤ, +) is a commutative group.

I.1.2- Ring :

Definition :
Let 𝐴 be a set combined with two internal laws denoted as (∗) and (δ).
We say that the triple (𝐴, ∗, δ) is a ring if
- (𝐴, ∗) is a commutative group.
- For all 𝑥, 𝑦, 𝑧 ∈ 𝐴,
𝑥 δ (𝑦 ∗ 𝑧) = (𝑥 δ 𝑦) ∗ (𝑥 δ 𝑧)
and (𝑥 ∗ 𝑦) δ 𝑧 = (𝑥 δ 𝑧) ∗ (𝑦 δ 𝑧),
which represent left and right distributivity.
- δ is associative.
If, in addition, δ is commutative, we call (𝐴, ∗, δ) a commutative ring. If δ has a
neutral element, we refer to (𝐴, ∗, δ) as a unitary ring.

Example :
(ℝ, +, . ) , (ℂ, +, . ) are rings.

I.1.3- Field :

Definition :
Let 𝕂 be a set combined with two internal laws (∗) and (δ). We say that the
triple (𝕂, ∗, δ) is a field if :
- (𝕂, ∗, δ) is a unitary ring.
- (𝕂 − {𝑒}, δ) is a group, where 𝑒 is the neutral element of (∗).
If, in addition, δ is commutative, then we refer to (𝕂, ∗, δ) as a commutative field.

Example :
(ℝ, +, .) is a commutative field because :
- (ℝ, +, .) is a ring.
- 1 is the neutral element of the multiplication (.) over ℝ,
so (ℝ, +, .) is a unitary ring.
- Each element of (ℝ − {0}) has a symmetric for the law (.) :
∀𝑥 ∈ ℝ∗, ∃𝑦 ∈ ℝ∗, 𝑥.𝑦 = 𝑦.𝑥 = 1 ⇒ 𝑦 = 1/𝑥
- (.) is associative :
∀𝑥, 𝑦, 𝑧 ∈ ℝ, 𝑥.(𝑦.𝑧) = (𝑥.𝑦).𝑧

I.1.4- Vector Spaces and Subspaces :


Let 𝕂 be a commutative field (typically ℝ or ℂ), and let 𝐸 be a non-empty set
combined with an internal law (+) :

(+): E × E → E
(𝑥, 𝑦) → 𝑥 + 𝑦

and an external law (.) (called scalar multiplication (s-multiplication)) :

(. ): 𝕂 × 𝐸 → 𝐸
(𝜆, 𝑥) → 𝜆. 𝑥

Definition :
A vector space over the field 𝕂 (i.e. a 𝕂-vector space) is a triple (𝐸, +, .) that satisfies the following :
1- (𝐸, +) is a commutative group.
2- ∀𝜆 ∈ 𝕂, ∀ 𝑢, 𝑣 ∈ 𝐸, 𝜆 . (𝑢 + 𝑣) = 𝜆 . 𝑢 + 𝜆 . 𝑣
3- ∀𝜆, 𝜇 ∈ 𝕂, ∀ 𝑢 ∈ 𝐸, (𝜆 + 𝜇) . 𝑢 = 𝜆 . 𝑢 + 𝜇 . 𝑢
4- ∀𝜆, 𝜇 ∈ 𝕂, ∀ 𝑢 ∈ 𝐸, (𝜆 . 𝜇) . 𝑢 = 𝜆 . ( 𝜇 . 𝑢)
5- ∀𝑢 ∈ 𝐸, 1𝕂 . 𝑢 = 𝑢

Remarks :
The elements of a vector space are called vectors and the elements of 𝕂 are called scalars.

If 𝕂 = ℝ, then the vector space E is called a « real vector space », and if 𝕂 = ℂ, then E is called
a « complex vector space ».

Example :
(𝑨(ℝ, ℝ), +, .), where 𝑨(ℝ, ℝ) is the set of real-valued functions from ℝ to ℝ, is a vector
space.
(ℝ[𝑋], +, .), the set of polynomials with coefficients in ℝ, is a vector space over the field
𝕂 = ℝ.

(ℝ2, +, .) is a vector space.

(ℝ𝑛, +, .) is an ℝ-vector space (𝑛 ∈ ℕ∗).

Definition : (vector subspace)


Let 𝐸 be a 𝕂-vector space and let 𝐹 be a subset of 𝐸 (𝐹 ⊂ 𝐸). We call 𝐹 a vector subspace if it has the
structure of a vector space, which means 𝐹 satisfies the axioms of a vector space (in the previous
definition) with the same internal and external laws as 𝐸.
Theorem :
Let 𝐸 be a 𝕂-vector space and 𝐹 ⊂ 𝐸. 𝐹 is said to be a vector subspace iff :
1. 0𝐸 ∈ 𝐹.
2. ∀ 𝑢, 𝑣 ∈ 𝐹: 𝑢 + 𝑣 ∈ 𝐹
3. ∀ 𝑢 ∈ 𝐹, ∀𝜆 ∈ 𝕂 ∶ 𝜆𝑢 ∈ 𝐹.

Example :
The set 𝐹 = {(𝑥, 𝑦, 𝑧) ∈ ℝ3 ; 𝑥 = 𝑦 + 2𝑧} is a vector subspace.
For any vector space 𝐸, {0𝐸} is always a vector subspace.
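The three conditions of the theorem can be spot-checked numerically on this 𝐹. A small Python sketch (the sample vectors u, v and the function names are ours, not from the notes):

```python
# Numeric spot-check (not a proof) that F = {(x, y, z) : x = y + 2z}
# satisfies the three subspace conditions of the theorem.

def in_F(v):
    """Membership test for F: first coordinate equals y + 2z."""
    x, y, z = v
    return x == y + 2 * z

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(lam, u):
    return tuple(lam * a for a in u)

u = (5, 1, 2)    # 5 = 1 + 2*2, so u is in F
v = (-3, 1, -2)  # -3 = 1 + 2*(-2), so v is in F

assert in_F((0, 0, 0))    # 1. the zero vector lies in F
assert in_F(add(u, v))    # 2. closed under addition
assert in_F(scale(7, u))  # 3. closed under scalar multiplication
```

A full proof would quantify over all vectors and scalars; the check above only probes sample points.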

I.1.5- Sum of Vector Subspaces :


Definition :

Let 𝐸 be a 𝕂-vector space and let 𝐹1, 𝐹2, … , 𝐹𝑖 (𝑖 ∈ ℕ, 𝑖 ≥ 1) be vector subspaces of
𝐸. The sum of 𝐹1, 𝐹2, … , 𝐹𝑖 is the set of all possible sums of elements of 𝐹1, 𝐹2, … , 𝐹𝑖 :
𝐹1 + 𝐹2 + ⋯ + 𝐹𝑖 = {𝑢1 + 𝑢2 + ⋯ + 𝑢𝑖 | 𝑢1 ∈ 𝐹1, … , 𝑢𝑖 ∈ 𝐹𝑖}

Example : Let
𝐹 = {(𝑥, 0,0) ∈ ℝ3 | 𝑥 ∈ ℝ}
𝐺 = {(0,0, 𝑧) ∈ ℝ3 | 𝑧 ∈ ℝ}
Then 𝐹 + 𝐺 = {(𝑥, 0, 𝑧) ∈ ℝ3 | 𝑥, 𝑧 ∈ ℝ}

Definition : (Direct sum)


Let 𝐹1 and 𝐹2 be subspaces of a vector space 𝐸. The vector space 𝐸 is said to be the direct
sum of 𝐹1 and 𝐹2 iff :

1. 𝐸 = 𝐹1 + 𝐹2
2. 𝐹1 ∩ 𝐹2 = {0𝐸 }
and we write 𝑬 = 𝑭𝟏 ⊕ 𝑭𝟐

Example
Suppose 𝐹1 is the subspace of ℝ2 of those vectors whose last coordinate equals 0,

and 𝐹2 is the subspace of ℝ2 of those vectors whose first coordinate equals 0,


𝐹1 = {(𝑥, 0) ∈ ℝ2 | 𝑥 ∈ ℝ} 𝐹2 = {(0, 𝑦) ∈ ℝ2 | 𝑦 ∈ ℝ}
1. 𝐹1 + 𝐹2 = {(𝑥, 0) + (0, 𝑦) = (𝑥, 𝑦)| 𝑥, 𝑦 ∈ ℝ} = ℝ2
2. 𝐹1 ∩ 𝐹2 = {(0,0)}

Then ℝ2 = 𝐹1 ⊕ 𝐹2.
I.2- Vector Space Bases

I.2-1-Linear combinations and spans

Definition : (Linear combination)

A linear combination of list (𝒖𝟏 , 𝒖𝟐 … 𝒖𝒏 ) of vectors in 𝑬 is a vector of the form


𝜆1 𝑢1 + 𝜆2 𝑢2 + ⋯ + 𝜆𝑛 𝑢𝑛
Where 𝜆1 𝜆2 , … 𝜆𝑛 ∈ 𝕂

Example
- (17, −4, 2) is a linear combination of the list of vectors ((2,1, −3), (1, −2,4))
because we can find 𝜆1 = 6, 𝜆2 = 5 such that (17, −4, 2) = 6(2,1, −3) + 5(1, −2,4).
- (17, −4, 5) is not a linear combination of ((2,1, −3), (1, −2,4)) because we cannot
find any 𝜆1, 𝜆2 ∈ ℝ satisfying :
(17, −4, 5) = 𝜆1(2,1, −3) + 𝜆2(1, −2,4)

(more exactly, the system { 17 = 2𝜆1 + 𝜆2 ; −4 = 𝜆1 − 2𝜆2 ; 5 = −3𝜆1 + 4𝜆2 } has no solution)

Definition : (Spans)
The set of all linear combinations of the list of vectors (𝑢1, 𝑢2, … , 𝑢𝑛) in 𝑬 is called the span of
𝑢1, 𝑢2, … , 𝑢𝑛, denoted span(𝒖𝟏, 𝒖𝟐, … , 𝒖𝒏). In other words :
span(𝒖𝟏, 𝒖𝟐, … , 𝒖𝒏) = {𝝀𝟏𝒖𝟏 + 𝝀𝟐𝒖𝟐 + ⋯ + 𝝀𝒏𝒖𝒏 | 𝜆1, 𝜆2, … , 𝜆𝑛 ∈ 𝕂}

Example
- (17, −4,2) ∈ 𝑠𝑝𝑎𝑛((2,1, −3), (1, −2,4))
- (17, −4,5) ∉ 𝑠𝑝𝑎𝑛((2,1, −3), (1, −2,4))
- (2,1, −3) ∈ 𝑠𝑝𝑎𝑛((2,1, −3), (1, −2,4)) because (2,1, −3) = 𝜆1(2,1, −3) + 𝜆2(1, −2,4)
where 𝜆1 = 1 and 𝜆2 = 0.

Properties
Let (𝑢1, 𝑢2, … , 𝑢𝑛) be a list of vectors in 𝑬. Then
. ∀ 𝑖 ∈ {1, … , 𝑛}, 𝑢𝑖 ∈ 𝑠𝑝𝑎𝑛(𝑢1, 𝑢2, … , 𝑢𝑛)
. 𝑠𝑝𝑎𝑛(𝑢1, 𝑢2, … , 𝑢𝑛) is a subspace of 𝑬
. If 𝐹 ⊂ 𝐸 is a subspace such that 𝑢1, 𝑢2, … , 𝑢𝑛 ∈ 𝐹, then 𝑠𝑝𝑎𝑛(𝑢1, 𝑢2, … , 𝑢𝑛) ⊂ 𝐹

Definition :
If 𝑠𝑝𝑎𝑛(𝑢1, 𝑢2, … , 𝑢𝑛) = 𝐸, then we say that (𝑢1, 𝑢2, … , 𝑢𝑛) spans 𝐸 and we call 𝐸 a finite-
dimensional space.

Example
The list of usual vectors ((1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)) spans ℝ4.
I.2-2-Linear Independence

Definition :
A list of vectors (𝑢1, 𝑢2, … , 𝑢𝑛) is called linearly independent if the only solution for 𝜆1, 𝜆2, … , 𝜆𝑛 ∈ 𝕂 of
the equation 𝝀𝟏𝒖𝟏 + 𝝀𝟐𝒖𝟐 + ⋯ + 𝝀𝒏𝒖𝒏 = 𝟎𝑬 is 𝜆1 = 𝜆2 = ⋯ = 𝜆𝑛 = 0.
(𝑢1, 𝑢2, … , 𝑢𝑛) is called linearly dependent if there exist 𝜆1, 𝜆2, … , 𝜆𝑛 ∈ 𝕂, not all zero, such that

𝝀𝟏 𝒖 𝟏 + 𝝀𝟐 𝒖 𝟐 + ⋯ + 𝝀𝒏 𝒖 𝒏 = 𝟎 𝑬 .

Example
. The list of vectors ((1,0,0), (0,1,0), (0,0,1)) is linearly independent in ℝ3
. The list of vectors ((2,3,1), (1, −1,2), (7,3,8)) is linearly dependent in ℝ3 (because we can find 𝜆1 =
2, 𝜆2 = 3, 𝜆3 = −1 such that 𝜆1(2,3,1) + 𝜆2(1, −1,2) + 𝜆3(7,3,8) = (0,0,0))
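The dependency relation claimed above can be verified directly; a short Python check (the helper `lincomb` is our own naming):

```python
def lincomb(coeffs, vectors):
    """Compute l1*u1 + ... + ln*un componentwise."""
    return tuple(sum(l * v[i] for l, v in zip(coeffs, vectors))
                 for i in range(len(vectors[0])))

vecs = [(2, 3, 1), (1, -1, 2), (7, 3, 8)]
# The nonzero coefficients (2, 3, -1) witness linear dependence:
assert lincomb((2, 3, -1), vecs) == (0, 0, 0)
```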

I.2-3-Bases

Definition :
A list of vectors (𝑢1 , 𝑢2 … 𝑢𝑛 ) is a basis for the finite-dimensional vector space 𝐸 if :

1- (𝑢1 , 𝑢2 … 𝑢𝑛 ) is linearly independent.


2- 𝐸= 𝑠𝑝𝑎𝑛 (𝑢1 , 𝑢2 … 𝑢𝑛 ).

Example
- The list of vectors ((1,0,0),(0,1,0),(0,0,1)) is a basis of ℝ3 (usually called the
usual basis or standard basis of ℝ3).
- The list of vectors ((1,2),(1,1)) is a basis of ℝ2.
Corollary : Every finite-dimensional vector space has a basis.

Theorem
Every linearly independent list of vectors in a finite-dimensional vector space 𝐸 can be
extended to a basis of 𝐸.

I.2-4-Dimension

Definition :
We call the length of any basis of 𝐸 the dimension of 𝐸, and we denote it by 𝒅𝒊𝒎𝑬.

Theorem
Let 𝐸 be a vector space with 𝒅𝒊𝒎𝑬 = 𝒏
- If 𝐸 = 𝑠𝑝𝑎𝑛(𝑢1, 𝑢2, … , 𝑢𝑛), then (𝑢1, 𝑢2, … , 𝑢𝑛) is a basis of 𝐸.
- If (𝑢1, 𝑢2, … , 𝑢𝑛) is linearly independent in 𝐸, then (𝑢1, 𝑢2, … , 𝑢𝑛) is a basis of 𝐸.

Properties
If 𝐹 and 𝐺 are subspaces of a finite-dimensional vector space 𝐸, then :
𝑑𝑖𝑚(𝐹 + 𝐺 ) = 𝑑𝑖𝑚(𝐹 ) + 𝑑𝑖𝑚(𝐺 ) − 𝑑𝑖𝑚(𝐹 ∩ 𝐺)

I.3- Linear maps


Throughout this part, 𝐸 and 𝐹 denote vector spaces over 𝕂 and 𝑛 ∈ ℕ∗.

Definition :
A linear map from 𝐸 to 𝐹 is a function 𝑻: 𝐸 → 𝐹
that satisfies :

1- Additivity : 𝑇(𝑢 + 𝑣) = 𝑇(𝑢) + 𝑇(𝑣) for all 𝑢, 𝑣 ∈ 𝐸


2- Homogeneity :𝑇(𝜆𝑢) = 𝜆𝑇(𝑢) for all 𝜆 ∈ 𝕂 and 𝑢 ∈ 𝐸

Remarks
- The properties 1 and 2 can be written as one : 𝑇(𝜆𝑢 + 𝑣) = 𝜆𝑇(𝑢) + 𝑇(𝑣) for all 𝜆 ∈
𝕂 and for all 𝑢, 𝑣 ∈ 𝐸.
- The set of all linear maps from 𝐸 to 𝐹 is denoted by 𝑳(𝑬, 𝑭). Moreover, if 𝐸 = 𝐹,
then we write 𝑳(𝑬, 𝑬) = 𝑳(𝑬), and 𝑇 ∈ 𝑳(𝑬) is called a linear operator on 𝐸.

Example
- The zero map 𝟎: 𝐸 → 𝐹 mapping every element 𝑢 ∈ 𝐸 to 0 ∈ 𝐹 is linear.
- The identity map 𝑰: 𝐸 → 𝐸 defined as 𝑰(𝑢) = 𝑢 is linear.
- Let 𝑇: ℝ2 → ℝ2
(𝑥, 𝑦) → (𝑥 − 2𝑦, 3𝑥 + 𝑦)
Check the linearity of 𝑇:
Then for (𝑥, 𝑦), (𝑥 ′ , 𝑦 ′ ) ∈ ℝ2 we have :

𝑇((𝑥, 𝑦) + (𝑥′, 𝑦′)) = 𝑇(𝑥 + 𝑥′, 𝑦 + 𝑦′)
= ((𝑥 + 𝑥′) − 2(𝑦 + 𝑦′), 3(𝑥 + 𝑥′) + (𝑦 + 𝑦′))
= (𝑥 − 2𝑦, 3𝑥 + 𝑦) + (𝑥′ − 2𝑦′, 3𝑥′ + 𝑦′)
= 𝑇(𝑥, 𝑦) + 𝑇(𝑥′, 𝑦′)
Similarly, for (𝑥, 𝑦) ∈ ℝ2 and 𝜆 ∈ ℝ we have :

𝑇(𝜆(𝑥, 𝑦)) = 𝑇(𝜆𝑥, 𝜆𝑦)


= (𝜆𝑥 − 2𝜆𝑦, 3𝜆𝑥 + 𝜆𝑦)
= 𝜆(𝑥 − 2𝑦, 3𝑥 + 𝑦) = 𝜆𝑇(𝑥, 𝑦)
Hence 𝑇 is a linear map (operator).
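The two linearity properties can also be probed numerically on sample vectors; a small Python sketch (sample points are ours, and passing them is evidence, not a proof):

```python
def T(v):
    """The map T(x, y) = (x - 2y, 3x + y) from the example."""
    x, y = v
    return (x - 2 * y, 3 * x + y)

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(lam, u):
    return (lam * u[0], lam * u[1])

u, v, lam = (1, 4), (-2, 5), 7
assert T(add(u, v)) == add(T(u), T(v))       # additivity
assert T(scale(lam, u)) == scale(lam, T(u))  # homogeneity
```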

Definition :(Kernel of linear map)


Let 𝑇: 𝐸 → 𝐹 be a linear map. The kernel (or null space) of 𝑇 is the set of all vectors in 𝐸
mapped to zero : 𝒌𝒆𝒓𝑻 = {𝑢 ∈ 𝐸 | 𝑇(𝑢) = 𝟎𝐹}

Example
We recall 𝑇 from the previous example :

𝑘𝑒𝑟𝑇 = {𝑢 ∈ ℝ2 | 𝑇(𝑢) = 0ℝ2} = {(𝑥, 𝑦) ∈ ℝ2 | (𝑥 − 2𝑦, 3𝑥 + 𝑦) = (0,0)}

𝐾𝑒𝑟𝑇 = {(0,0)}
If 𝑇 ∈ 𝑳(𝑬, 𝑭) is the zero map, i.e. 𝑇(𝑢) = 0 for every 𝑢 ∈ 𝐸, then 𝑘𝑒𝑟𝑇 = 𝐸.

Theorem (the kernel is a subspace)


Let 𝑇 ∈ 𝑳(𝑬, 𝑭), then 𝑘𝑒𝑟𝑇 is a subspace of 𝐸

Proof. 𝑇 is a linear map, then

𝑻(0) = 0, thus 0 ∈ 𝑘𝑒𝑟𝑇.

Suppose 𝑢, 𝑣 ∈ 𝑘𝑒𝑟𝑇. Then
𝑇(𝑢 + 𝑣) = 𝑇(𝑢) + 𝑇(𝑣) = 0 + 0 = 0.
Hence 𝑢 + 𝑣 ∈ 𝑘𝑒𝑟𝑇, thus 𝑘𝑒𝑟𝑇 is closed under addition.
Suppose 𝑢 ∈ 𝑘𝑒𝑟𝑇 and 𝜆 ∈ 𝕂. Then
𝑇(𝜆𝑢) = 𝜆𝑇(𝑢) = 𝜆 0 = 0, thus 𝜆𝑢 ∈ 𝑘𝑒𝑟𝑇.

Hence 𝑘𝑒𝑟𝑇 is a subspace of 𝐸

Theorem
Let 𝑇 ∈ 𝑳(𝑬, 𝑭) then :
𝑇 is injective ⇔ 𝐾𝑒𝑟𝑇 = {0𝐸}

Theorem
Let 𝑇 ∈ 𝑳(𝑬, 𝑭). Then 𝑟𝑎𝑛𝑔𝑒(𝑇) = {𝑇(𝑢) | 𝑢 ∈ 𝐸} (some mathematicians use the notation 𝐼𝑚(𝑇)) is a
subspace of 𝐹.

Definition :
A linear map 𝑇: 𝐸 → 𝐹 is called surjective if 𝑟𝑎𝑛𝑔𝑒(𝑇) = 𝐹
A linear map 𝑇: 𝐸 → 𝐹 is called bijective if 𝑇 is both injective and surjective

Theorem
Suppose 𝐸 is a finite-dimensional vector space and 𝑇 ∈ 𝑳(𝑬, 𝑭). Then 𝑟𝑎𝑛𝑔𝑒(𝑇) is finite-
dimensional and 𝑑𝑖𝑚𝐸 = dim(𝑘𝑒𝑟𝑇) + dim(𝑟𝑎𝑛𝑔𝑒(𝑇))

Example
𝑇: ℝ2 → ℝ2
(𝑥, 𝑦) → (𝑥 − 2𝑦, 3𝑥 + 𝑦)
has ker(𝑇) = {0ℝ2} and 𝑟𝑎𝑛𝑔𝑒(𝑇) = ℝ2.
Then 𝑑𝑖𝑚ℝ2 = dim(𝑘𝑒𝑟𝑇) + dim(𝑟𝑎𝑛𝑔𝑒(𝑇)) = 0 + 2 = 2
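Since ker 𝑇 = {0} exactly when the matrix of 𝑇 is invertible, this example can be checked via a 2×2 determinant. A small Python sketch (the matrix entries are read off 𝑇, the rest is our bookkeeping):

```python
# T(x, y) = (x - 2y, 3x + y) has matrix [[1, -2], [3, 1]] in the usual basis.
a, b, c, d = 1, -2, 3, 1
det = a * d - b * c          # 1*1 - (-2)*3 = 7
assert det == 7 and det != 0

# Nonzero determinant => ker T = {0} and range T = R^2, so the
# rank-nullity identity dim R^2 = dim(ker T) + dim(range T) reads 2 = 0 + 2.
dim_ker, dim_range = (0, 2) if det != 0 else (None, None)
assert dim_ker + dim_range == 2
```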

Corollary

Let 𝑻 ∈ 𝑳(𝑬, 𝑭)

- If dim(𝐸 ) > dim(𝐹) then 𝑇 is not injective


- If dim(𝐸 ) < dim(𝐹) then 𝑇 is not surjective.
Ⅱ-Matrices

Throughout this chapter, 𝕂 is ℝ or ℂ, and 𝑚 and 𝑛 are integers in ℕ∗.

Ⅱ-1-Matrix of linear map


Let 𝐸, 𝐹 be 𝕂-vector spaces of dimensions 𝑛 and 𝑚 respectively, let 𝐵 = {𝑒1, 𝑒2, … , 𝑒𝑛} be a
basis of 𝐸 and let 𝐵′ = {𝑒′1, 𝑒′2, … , 𝑒′𝑚} be a basis of 𝐹.

Let 𝑇 be a linear map from 𝐸 to 𝐹. The vectors 𝑇(𝑒1), 𝑇(𝑒2), … , 𝑇(𝑒𝑛) are vectors in 𝐹, so each of
them decomposes in the basis 𝐵′ (for 𝑗 = 1, … , 𝑛) :

𝑇(𝑒𝑗) = 𝜆1𝑗 𝑒′1 + 𝜆2𝑗 𝑒′2 + ⋯ + 𝜆𝑚𝑗 𝑒′𝑚

The table whose 𝑗-th column collects the coordinates of 𝑇(𝑒𝑗) in 𝐵′,

      𝑇(𝑒1)  𝑇(𝑒2)  …  𝑇(𝑒𝑛)
    (  𝜆11    𝜆12   …   𝜆1𝑛  )  𝑒′1
    (  𝜆21    𝜆22   …   𝜆2𝑛  )  𝑒′2
    (   ⋮      ⋮          ⋮   )   ⋮
    (  𝜆𝑚1    𝜆𝑚2   …   𝜆𝑚𝑛  )  𝑒′𝑚

is called the matrix of the linear map 𝑇 relative to the bases 𝐵 and 𝐵′, mostly denoted 𝑀𝑎𝑡𝐵𝐵′(𝑇).

Remarks :
1. If 𝐸 = 𝐹 and 𝐵 = 𝐵′, we simply write 𝑀𝑎𝑡𝐵(𝑇).
2. The matrix 𝑀𝑎𝑡𝐵𝐵′(𝑇) depends on the choice of the bases 𝐵 and 𝐵′.

Example :

Let 𝑇 be the linear map from ℝ3 to ℝ2 defined as follows :


𝑇: ℝ3 → ℝ2
(𝑥, 𝑦, 𝑧) → (2𝑥 − 𝑦, 𝑥 + 𝑦 + 𝑧)

Let 𝐵 = {(1,0,0), (0,1,0), (0,0,1)} be the usual basis of ℝ3 and 𝐵′ = {(1,0), (0,1)} the usual
basis of ℝ2. Then

𝑇((1,0,0)) = (2,1) = 2 × (1,0) + 1 × (0,1).

𝑇((0,1,0)) = (−1,1) = −1 × (1,0) + 1 × (0,1).

𝑇((0,0,1)) = (0,1) = 0 × (1,0) + 1 × (0,1).

The matrix of 𝑇 :

𝑀𝑎𝑡𝐵𝐵′(𝑇) = ( 2 −1 0
              1  1 1 )

2. Let 𝑔 be the linear map from ℝ3[𝑋] into ℝ3[𝑋] defined by 𝑔 : 𝑝 → 𝑋2𝑝′′ + 𝑝(1), with the usual
basis of ℝ3[𝑋] given by 𝐵 = {1, 𝑋, 𝑋2, 𝑋3}.

𝑔(1) = 1, 𝑔(𝑋) = 1, 𝑔(𝑋2) = 2𝑋2 + 1 and 𝑔(𝑋3) = 6𝑋3 + 1.

Then

𝑀𝑎𝑡𝐵(𝑔) = ( 1 1 1 1
            0 0 0 0
            0 0 2 0
            0 0 0 6 )
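The recipe "the columns are the coordinates of the images of the basis vectors" translates directly into code for the first example; a short Python sketch (function and variable names are ours):

```python
def T(v):
    """T(x, y, z) = (2x - y, x + y + z), the map of the first example."""
    x, y, z = v
    return (2 * x - y, x + y + z)

basis_R3 = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
# In the usual basis of R^2 the coordinates of a vector are its components,
# so the j-th column of Mat_BB'(T) is simply T(e_j).
columns = [T(e) for e in basis_R3]
mat = [[col[i] for col in columns] for i in range(2)]  # rows of Mat_BB'(T)
assert mat == [[2, -1, 0], [1, 1, 1]]
```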

Ⅱ-2-Matrix
Definition : we call a matrix of type (𝒎, 𝒏) (or of dimensions 𝒎 × 𝒏) over a field 𝕂 any array of
numbers of the form
𝐴 = ( 𝑎11 𝑎12 … 𝑎1𝑛
      𝑎21 𝑎22 … 𝑎2𝑛
       ⋮    ⋮       ⋮
      𝑎𝑚1 𝑎𝑚2 … 𝑎𝑚𝑛 )

𝐴 = (𝑎𝑖𝑗) (where 1 ≤ 𝑖 ≤ 𝑚, 1 ≤ 𝑗 ≤ 𝑛) is called a rectangular matrix, where 𝒎 is the number of rows
and 𝒏 the number of columns. The numbers 𝑎11, 𝑎12, … , 𝑎𝑚𝑛 are known as the elements of the
matrix 𝐴. For any element 𝑎𝑖𝑗, the first suffix 𝒊 indicates that the element stands in the
𝑖-th row and the second suffix 𝒋 indicates that the element stands in the 𝑗-th column.
Notation

 The set of matrices over 𝕂 of type (𝒎, 𝒏) is denoted by 𝑴𝒎,𝒏(𝕂); if 𝑚 = 𝑛, the set is
denoted by 𝑴𝒏(𝕂).
 If 𝑚 = 𝑛, the matrix 𝐴 = (𝑎𝑖𝑗) is called a square matrix.
 If 𝑛 = 1, then 𝐴 is called a column matrix (column vector) :
𝑨 = ( 𝒂𝟏𝟏
      𝒂𝟐𝟏
       ⋮
      𝒂𝒎𝟏 )
 If 𝑚 = 1 then 𝐴 is called a row matrix (row vector) : 𝑨 = (𝒂𝟏𝟏 𝒂𝟏𝟐 … 𝒂𝟏𝒏 )
 The zero matrix of type (𝒎, 𝒏) is the matrix where all elements 𝑎𝑖𝑗 are equal to
0; it is denoted by 𝟎𝒎×𝒏.
𝟎 𝟎 𝟎
Example : 𝟎𝟑×𝟑 = (𝟎 𝟎 𝟎).
𝟎 𝟎 𝟎
Ⅱ-3-Operation on Matrices

Ⅱ-3-1. Equality of two matrices :


Two matrices 𝐴 and 𝐵 are said to be equal if

1. They are of the same order (type)


2. Their corresponding elements are equal.
That is if 𝐴 = (𝑎𝑖𝑗 )𝑚×𝑛 and 𝐵 = (𝑏𝑖𝑗 )𝑚×𝑛 then 𝑎𝑖𝑗 = 𝑏𝑖𝑗 for all 𝑖 and 𝑗.

Ⅱ-3-2. Scalar multiplication of matrix:


Let 𝜆 be a scalar. The scalar product of 𝜆 with a matrix 𝐴 = (𝑎𝑖𝑗)𝑚×𝑛 is denoted 𝜆𝐴 :
𝜆𝑎11 𝜆𝑎12 … 𝜆𝑎1𝑛
𝜆𝐴 = ( ⋮ ⋮ ⋱ ⋮ )
𝜆𝑎𝑚1 𝜆𝑎𝑚2 … 𝜆𝑎𝑚𝑛

Ⅱ-3-3. Addition of two matrices :


Let 𝐴 = (𝑎𝑖𝑗)𝑚×𝑛 and 𝐵 = (𝑏𝑖𝑗)𝑚×𝑛 be two matrices of the same order. Then the sum 𝐴 + 𝐵 is given
by 𝐴 + 𝐵 = (𝑎𝑖𝑗 )𝑚×𝑛 + (𝑏𝑖𝑗 )𝑚×𝑛 = (𝑎𝑖𝑗 + 𝑏𝑖𝑗 )𝑚×𝑛

Ⅱ-3-4. Multiplication of two matrices :


Two matrices 𝐴 and 𝐵 are said to be conformable for the product 𝐴𝐵 if the number of columns in 𝐴 is
equal to the number of rows in 𝐵. Let 𝐶 = 𝐴𝐵 be the matrix product, of order 𝑚 × 𝑟, where
𝐴 = (𝑎𝑖𝑗)𝑚×𝑛 and 𝐵 = (𝑏𝑖𝑗)𝑛×𝑟 :

𝑐𝑖𝑗 = ∑𝑛𝑘=1 𝑎𝑖𝑘 𝑏𝑘𝑗 = 𝑎𝑖1 𝑏1𝑗 + 𝑎𝑖2 𝑏2𝑗 + … … … + 𝑎𝑖𝑛 𝑏𝑛𝑗
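The formula 𝑐𝑖𝑗 = Σ𝑘 𝑎𝑖𝑘 𝑏𝑘𝑗 translates directly into code. A minimal Python sketch (our own helper, checked against the matrices 𝐴 and 𝐵 of Example (Ⅱ-3) below):

```python
def matmul(A, B):
    """Product of an m x n and an n x r matrix, both given as lists of rows,
    following c_ij = sum_k a_ik * b_kj."""
    n = len(B)
    assert all(len(row) == n for row in A), "A and B are not conformable"
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 3, -4], [0, -5, 2], [2, 1, 3]]
B = [[1, 0, 3, 1], [0, 2, 1, 4], [0, -3, -1, 6]]
assert matmul(A, B) == [[1, 18, 10, -11], [0, -16, -7, -8], [2, -7, 4, 24]]
```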

Theorem
Let 𝐸, 𝐹 and 𝐺 be vector spaces and let 𝐵1, 𝐵2, 𝐵3 be bases of these vector spaces.
Let 𝑇 ∈ 𝑳(𝑬, 𝑭) and 𝑆 ∈ 𝑳(𝑭, 𝑮). Then
𝑀𝑎𝑡𝐵1𝐵3(𝑆 ∘ 𝑇) = 𝑀𝑎𝑡𝐵2𝐵3(𝑆) 𝑀𝑎𝑡𝐵1𝐵2(𝑇)

Ⅱ-3-5. Transpose:
The transpose of a matrix 𝐴 = (𝑎𝑖𝑗)𝑚×𝑛, written 𝐴′ (or 𝐴𝑡), is the matrix obtained by writing the rows
of 𝐴 in order as columns. That is,

𝐴′ = (𝑎𝑗𝑖)𝑛×𝑚

Example (Ⅱ-3) :
Let
𝐴 = ( 1  3 −4        𝐵 = ( 1  0  3 1
      0 −5  2              0  2  1 4
      2  1  3 ),           0 −3 −1 6 )

Calculate each of 𝐴 + 𝐵, 𝐴𝐵, 𝐵𝐴, 𝐵′, 𝐴′, 𝐵 + 𝐴𝐵, where possible.

𝐴 + 𝐵 is not possible because 𝐴 and 𝐵 are not of the same order.

𝐴𝐵 = ( 1×1+3×0+(−4)×0   1×0+3×2+(−4)×(−3)   1×3+3×1+(−4)×(−1)   1×1+3×4+(−4)×6
       0×1+(−5)×0+2×0   0×0+(−5)×2+2×(−3)   0×3+(−5)×1+2×(−1)   0×1+(−5)×4+2×6
       2×1+1×0+3×0      2×0+1×2+3×(−3)      2×3+1×1+3×(−1)      2×1+1×4+3×6    )

𝐴𝐵 = ( 1  18  10  −11
       0 −16  −7   −8
       2  −7   4   24 )
𝐵𝐴 is impossible because 𝐵 and 𝐴 are not conformable (𝐵 has 4 columns, 𝐴 has 3 rows).
𝐵′ = ( 1 0  0        𝐴′ = (  1  0 2
       0 2 −3               3 −5 1
       3 1 −1               −4  2 3 )
       1 4  6 ),
𝐵 + 𝐴𝐵 = ( 1  0  3 1        ( 1  18  10  −11
           0  2  1 4    +     0 −16  −7   −8
           0 −3 −1 6 )        2  −7   4   24 )

        = ( 1+1    0+18     3+10    1+(−11)
            0+0    2+(−16)  1+(−7)  4+(−8)
            0+2   −3+(−7)  −1+4     6+24   )

𝐵 + 𝐴𝐵 = ( 2  18  13  −10
           0 −14  −6   −4
           2 −10   3   30 )

Properties of some operations


 (𝐴′ )′ = 𝐴
 (𝜆𝐴)′ = 𝜆𝐴′
 (𝐴 + 𝐵)′ = 𝐴′ + 𝐵′
 (𝐴𝐵)′ = 𝐵′ 𝐴′
 Matrix addition is commutative 𝐴 + 𝐵 = 𝐵 + 𝐴
 Matrix addition is associative 𝐴 + (𝐵 + 𝐶 ) = (𝐴 + 𝐵) + 𝐶
 The matrix product is distributive with respect to addition, that is :

𝐴(𝐵 + 𝐶 ) = 𝐴𝐵 + 𝐴𝐶
(𝐵 + 𝐶 )𝐴 = 𝐵𝐴 + 𝐶𝐴

 The matrix product is associative

A(BC) = (AB)C = ABC

 𝜆(𝐴𝐵) = (𝜆𝐴)𝐵 = 𝐴(𝜆𝐵)


 If 𝐴 and 𝐵 are matrices of the same type and 𝐴𝐵 = 𝐵𝐴, we say that 𝐴 and 𝐵 commute.
In this case the binomial formula holds for (𝐴 + 𝐵)^𝑃, for all 𝑃 ∈ ℕ∗ :

(𝐴 + 𝐵)^𝑃 = Σ_{𝑘=0}^{𝑃} C(𝑃, 𝑘) 𝐴^{𝑃−𝑘} 𝐵^𝑘,  where C(𝑃, 𝑘) is the binomial coefficient.
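The binomial formula can be demonstrated numerically. A Python sketch using two diagonal matrices as a convenient commuting pair (the helper names are ours):

```python
from math import comb

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def matpow(A, p):
    R = [[1, 0], [0, 1]]  # I_2
    for _ in range(p):
        R = matmul(R, A)
    return R

A = [[2, 0], [0, 3]]
B = [[5, 0], [0, -1]]           # diagonal matrices always commute
assert matmul(A, B) == matmul(B, A)

P = 3
lhs = matpow(matadd(A, B), P)   # (A + B)^P
rhs = [[0, 0], [0, 0]]          # sum of C(P, k) * A^(P-k) * B^k
for k in range(P + 1):
    term = matmul(matpow(A, P - k), matpow(B, k))
    rhs = matadd(rhs, [[comb(P, k) * x for x in row] for row in term])
assert lhs == rhs
```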

Ⅱ-4-Square Matrices
As defined previously, a square matrix is a matrix of type 𝑛 × 𝑛 :
𝐴 = ( 𝑎11 𝑎12 … 𝑎1𝑛
      𝑎21 𝑎22 … 𝑎2𝑛
       ⋮    ⋮  ⋱   ⋮
      𝑎𝑛1 𝑎𝑛2 … 𝑎𝑛𝑛 )

Ⅱ-4-1. Integral power of matrices:


Let 𝐴 be a square matrix of order 𝑛, and let 𝑟 be a positive integer. We define
𝐴𝑟 = 𝐴 × 𝐴 × … × 𝐴 (𝑟-fold multiplication)
𝐴2 = 𝐴 × 𝐴
𝐴3 = 𝐴 × 𝐴 × 𝐴

 Powers of a matrix commute, that is

𝐴𝑟 𝐴𝑠 = 𝐴𝑟+𝑠 = 𝐴𝑠 𝐴𝑟

Ⅱ-4-2. Special Types of square matrices :


Definition :(Diagonal matrix)
A square matrix 𝐴 = (𝑎𝑖𝑗)𝑛×𝑛 is called a diagonal matrix if each of its non-diagonal
elements is zero (the diagonal elements are those with indices 𝑖 = 𝑗). Example :
𝐴 = ( 1  0 0
      0 −9 0
      0  0 3 )
Definition :(Identity matrix)

A diagonal matrix whose diagonal elements are all equal to 1 is denoted by 𝑰𝑛.

1 0 0
Example : 𝑰𝟑 = (0 1 0)
0 0 1

Definition :(Upper Triangular matrix)

A square matrix is said to be an upper triangular matrix if 𝑎𝑖𝑗 = 0 for 𝑖 > 𝑗.

5 7 4
Example : 𝐴 = (0 2 0)
0 0 3

Definition :(Lower Triangular matrix)

A square matrix is said to be a lower triangular matrix if 𝑎𝑖𝑗 = 0 for 𝑖 < 𝑗.

−1 0 0
Example : 𝐴 = ( 4 2 0)
5 8 3

Definition :(Symmetric matrix)

A square matrix 𝐴 = (𝑎𝑖𝑗)𝑛×𝑛 is said to be symmetric if 𝐴 = 𝐴′.

8 −2 7
Example : 𝐴 = (−2 −9 3)
7 3 5
𝐴 is a symmetric matrix.

Definition :(Skew-Symmetric matrix)

A square matrix 𝐴 = (𝑎𝑖𝑗)𝑛×𝑛 is said to be skew-symmetric if 𝐴′ = −𝐴, that is, 𝑎𝑗𝑖 = −𝑎𝑖𝑗 for all
𝑖, 𝑗 (in particular, the diagonal elements are zero). Example :
𝐵 = (  0 −2 7
       2  0 3
      −7 −3 0 )
𝐵 is a skew-symmetric matrix.

Trace of a matrix
The trace of a square matrix 𝐴 is the sum of the diagonal entries of 𝐴, denoted 𝑻𝒓(𝐴).

Example

The trace of the symmetric matrix 𝐴 above is 𝑻𝒓(𝐴) = 𝟖 + (−𝟗) + 𝟓 = 𝟒.

Some definitions
 A matrix 𝐴 such that 𝐴2 = 𝐴 is called idempotent.
 If 𝐴 is a matrix and 𝑟 is the least positive integer such that 𝐴𝑟+1 = 𝐴, then 𝐴 is called
periodic of period 𝑟.
 If 𝐴 is a matrix for which 𝐴𝑟 = 𝟎 (the zero matrix), 𝐴 is called nilpotent. If 𝑟 is the least
positive integer for which this is true, 𝐴 is said to be nilpotent of order 𝑟.
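These definitions are easy to illustrate concretely; a short Python check (the sample matrices are our own choices):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Idempotent: this projection matrix satisfies P^2 = P.
P = [[1, 0], [0, 0]]
assert matmul(P, P) == P

# Nilpotent: a strictly upper triangular matrix N satisfies N^3 = 0
# (here N^2 != 0, so N is nilpotent of order 3).
N = [[0, 1, 2], [0, 0, 3], [0, 0, 0]]
N2 = matmul(N, N)
N3 = matmul(N2, N)
zero = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
assert N2 != zero and N3 == zero
```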

Ⅱ-4-3. Inverse of matrix and Elementary row Operations

Ⅱ-4-3.a. Determinant
Definition : Let 𝐴 = (𝑎𝑖𝑗)𝑛×𝑛 be a square matrix of order 𝑛. The number |𝐴| is called the determinant
of the matrix 𝐴.

1. Determinant of 𝟐 × 𝟐 :
Let 𝐴 = ( 𝑎 𝑏
          𝑐 𝑑 ).  Then |𝐴| = | 𝑎 𝑏 |
                             | 𝑐 𝑑 | = 𝑎𝑑 − 𝑐𝑏
2. Determinant of 𝟑 × 𝟑:
Let 𝐵 = ( 𝑎11 𝑎12 𝑎13
          𝑎21 𝑎22 𝑎23
          𝑎31 𝑎32 𝑎33 )
Then
|𝐵| = 𝑎11 | 𝑎22 𝑎23 |  −  𝑎12 | 𝑎21 𝑎23 |  +  𝑎13 | 𝑎21 𝑎22 |
          | 𝑎32 𝑎33 |         | 𝑎31 𝑎33 |         | 𝑎31 𝑎32 |
|𝐵| = 𝑎11(𝑎22𝑎33 − 𝑎23𝑎32) − 𝑎12(𝑎21𝑎33 − 𝑎23𝑎31) + 𝑎13(𝑎21𝑎32 − 𝑎22𝑎31)
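This expansion generalizes to any order by recursing on minors; a compact Python sketch (our own helper, anticipating the cofactor expansion of Theorem 1 below):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

assert det([[1, 2], [3, 4]]) == 1 * 4 - 3 * 2   # the 2x2 case: ad - cb
```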

Properties of Determinants
Let 𝐴 = (𝑎𝑖𝑗)𝑛×𝑛 and 𝐵 = (𝑏𝑖𝑗)𝑛×𝑛 be two matrices and 𝜆 ∈ ℝ.

 det(𝐴𝐵) = det(𝐴) det(𝐵)
 det(𝐴′) = det(𝐴)
 det(𝜆𝐴) = 𝜆^𝑛 det(𝐴)

Singular and non-singular matrix


Definition Let 𝐴 be a matrix of order 𝑛. 𝐴 is called a singular matrix if |𝐴| = 0, and non-singular
otherwise.
Ⅱ-4-3.b. Minor and cofactors
Definition Let 𝐴 = (𝑎𝑖𝑗)𝑛×𝑛 be a square matrix of order 𝑛. Then 𝑀𝑖𝑗 denotes the submatrix of 𝐴, of
order (𝑛 − 1) × (𝑛 − 1), obtained by deleting its 𝑖-th row and 𝑗-th column. The determinant |𝑀𝑖𝑗| is
called the minor of the element 𝑎𝑖𝑗 of 𝐴.

The cofactor of 𝑎𝑖𝑗 denoted by 𝐴𝑖𝑗 and is equal to (−1)𝑖+𝑗 |𝑀𝑖𝑗 |

Theorem 1 (The Laplace Expansion theorem)

Let 𝐴 = (𝑎𝑖𝑗)𝑛×𝑛 be a square matrix of order 𝑛. det(𝐴) is computed by
det(𝐴) = 𝑎𝑖1𝐶𝑖1 + 𝑎𝑖2𝐶𝑖2 + ⋯ + 𝑎𝑖𝑛𝐶𝑖𝑛 (expansion along the 𝑖-th row)
or
det(𝐴) = 𝑎1𝑗𝐶1𝑗 + 𝑎2𝑗𝐶2𝑗 + ⋯ + 𝑎𝑛𝑗𝐶𝑛𝑗 (expansion along the 𝑗-th column)
where 𝐶𝑖𝑗 = (−1)^(𝑖+𝑗) |𝑀𝑖𝑗|

Example
1. Compute the determinant of 𝐴 and the cofactor matrix of 𝐴 :
1 0 1
𝐴 = ( 2 −2 4 )
−3 −1 −3
Using the first row :

𝑑𝑒𝑡(𝐴) = 1 | −2  4 |  −  0 |  2  4 |  +  1 |  2 −2 |
           | −1 −3 |       | −3 −3 |       | −3 −1 |

𝑑𝑒𝑡(𝐴) = 1((−2 × −3) − (4 × −1)) − 0 + 1((2 × −1) − (−2 × −3))
𝑑𝑒𝑡(𝐴) = 10 − 8 = 2 ≠ 0

Or, using the first column :

𝑑𝑒𝑡(𝐴) = 1 | −2  4 |  −  2 |  0  1 |  +  (−3) |  0 1 |
           | −1 −3 |       | −1 −3 |          | −2 4 |
𝑑𝑒𝑡(𝐴) = 1(10) − 2(1) − 3(2) = 2 ≠ 0

𝐴 is a non-singular matrix.

𝑐𝑜𝑓𝑎𝑐𝑡𝑜𝑟(𝐴) = (  |−2 4 ; −1 −3|   −|2 4 ; −3 −3|   |2 −2 ; −3 −1|        ( 10 −6 −8
                −|0 1 ; −1 −3|     |1 1 ; −3 −3|  −|1 0 ; −3 −1|    =     −1  0  1
                 |0 1 ; −2 4|     −|1 1 ; 2 4|     |1 0 ; 2 −2|   )        2 −2 −2 )

(here |𝑎 𝑏 ; 𝑐 𝑑| denotes the 2×2 determinant with rows (𝑎, 𝑏) and (𝑐, 𝑑))

2. Let 𝐴 = ( 𝑎 𝑏
             𝑐 𝑑 ).  The cofactor matrix of 𝐴 is

𝑐𝑜𝑓𝑎𝑐𝑡𝑜𝑟(𝐴) = (  𝑑 −𝑐
                −𝑏  𝑎 )
Ⅱ-4-3.c. Adjoint Matrix
Definition The transpose of the matrix of cofactors of the elements 𝑎𝑖𝑗 of 𝐴, denoted by 𝑎𝑑𝑗 𝐴, is
called the adjoint of the matrix 𝐴.

Example
Find the adjoint matrix of the above example :

𝐴𝑑𝑗 𝐴 = 𝑐𝑜𝑓𝑎𝑐𝑡𝑜𝑟(𝐴)′ = ( 10 −1  2
                        −6  0 −2
                        −8  1 −2 )

Ⅱ-4-3.1. Inverse of matrix


Definition If 𝐴 and 𝐵 are two matrices such that 𝐴𝐵 = 𝐵𝐴 = 𝑰, then each is said to be the inverse of
the other.
The inverse of 𝐴 is denoted by 𝐴−1
𝐴−1 𝐴 = 𝐴𝐴−1 = 𝑰

Theorem 2 (Existence of the inverse) :

The necessary and sufficient condition for a square matrix 𝐴 to have an inverse is that
|𝐴| ≠ 0
(that is, 𝐴 is non-singular).

Theorem 3 (Uniqueness of the inverse) :

The inverse of a matrix, if it exists, is unique.

Properties of the inverse


 (𝐴−1)−1 = 𝐴
 (𝜆𝐴)−1 = (1/𝜆) 𝐴−1, 𝜆 ∈ ℝ∗
 (𝐴−1)′ = (𝐴′)−1
 (𝐴𝐵)−1 = 𝐵−1𝐴−1

Definition (Linear group)

We call the linear group of order 𝑛 over 𝕂, denoted 𝑮𝑳𝒏(𝕂), the group of all invertible
elements of 𝑴𝑛(𝕂).

Theorem 4
If 𝐴 is a non-singular matrix, then 𝐴−1 = (1/det(𝐴)) 𝐴𝑑𝑗 𝐴.
Example
The inverse of the previous 𝐴 is

𝐴−1 = (1/det 𝐴) 𝐴𝑑𝑗 𝐴 = (1/2) ( 10 −1  2        (  5  −1/2   1
                                −6  0 −2    =    −3    0   −1
                                −8  1 −2 )       −4   1/2  −1 )

Let 𝐵 = ( 1 3
          2 4 )
det(𝐵) = (1)(4) − (3)(2) = −2 ≠ 0
𝐵 is non-singular, then

𝐵−1 = (1/−2) (  4 −3        ( −2   3/2
               −2  1 )  =     1  −1/2 )
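The 2×2 computation above is easily checked in exact arithmetic; a Python sketch (our own helpers, using the adjoint-over-determinant formula of Theorem 4):

```python
from fractions import Fraction as Fr

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

B = [[1, 3], [2, 4]]
detB = B[0][0] * B[1][1] - B[0][1] * B[1][0]         # 1*4 - 3*2 = -2
adjB = [[B[1][1], -B[0][1]], [-B[1][0], B[0][0]]]    # adjoint of a 2x2
Binv = [[Fr(x, detB) for x in row] for row in adjB]  # B^-1 = adj(B)/det(B)
assert Binv == [[-2, Fr(3, 2)], [1, Fr(-1, 2)]]
assert matmul(B, Binv) == [[1, 0], [0, 1]]           # B * B^-1 = I
```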

Theorem 5
Let 𝐸, 𝐹 be 𝕂-vector spaces of the same dimension 𝑛 with bases 𝐵 and 𝐵′ respectively, and
let 𝑇 ∈ 𝑳(𝑬, 𝑭). The linear map 𝑇 is bijective if and only if 𝑴𝒂𝒕𝑩𝑩′(𝑻) is non-
singular.

Theorem 6 (Similar matrices)

Let 𝐴, 𝐵 ∈ 𝑴𝑛(𝕂). 𝐴 and 𝐵 are said to be similar matrices if and only if ∃𝑃 ∈
𝑮𝑳𝒏(𝕂), 𝐵 = 𝑃𝐴𝑃−1.

Ⅱ-4-3.2.Elementary row Operations


Elementary transformations (operations)
Some operations on matrices are called elementary transformations. There are six types of elementary
transformations: three of them are row transformations and the other three are column
transformations. They are as follows :

1- Interchange of any two rows or columns
(for the 𝑖-th and 𝑗-th rows, denoted by 𝑅𝑖 ↔ 𝑅𝑗)
2- Multiplication of the elements of any row or column by a non-zero number 𝜆
(for the 𝑖-th row multiplied by 𝜆, denoted 𝑅𝑖 → 𝜆𝑅𝑖)
3- Multiplication of the elements of any row or column by a scalar 𝜆 and addition of the result to
the corresponding elements of any other row or column
(for the 𝑗-th row multiplied by 𝜆 and added to the corresponding elements of the 𝑖-th row, denoted
by 𝑅𝑖 → 𝑅𝑖 + 𝜆𝑅𝑗)
Definition (Equivalent matrices)
A matrix 𝐵 is said to be equivalent to a matrix 𝐴 if 𝐵 can be obtained from 𝐴 by performing finitely
many successive elementary transformations on 𝐴.
Denoted 𝐴 ~ 𝐵

Example on how to compute the inverse using elementary transformations

The following method can be used with any non-singular square matrix.
Let the matrix 𝐴 = (  0 11
                     −2  3 )
First we write the matrix in the form (𝐴|𝑰) :
(  0 11 | 1 0
  −2  3 | 0 1 )

We reduce (𝐴|𝑰) to (𝑰|𝑨−𝟏) by performing finitely many successive elementary transformations :

𝑅1 ↔ 𝑅2 :              ( −2  3   | 0    1
                          0 11   | 1    0 )
𝑅1 → −(1/2)𝑅1,
𝑅2 → (1/11)𝑅2 :        (  1 −3/2 | 0   −1/2
                          0  1   | 1/11  0 )
𝑅1 → 𝑅1 + (3/2)𝑅2 :    (  1  0   | 3/22 −1/2
                          0  1   | 1/11  0 )

𝑨−𝟏 = ( 3/22 −1/2
        1/11   0 )

This method of finding the inverse is called Gauss-Jordan elimination.
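The same reduction can be automated; a Python sketch of Gauss-Jordan inversion in exact arithmetic (our own implementation, with partial pivoting as in the Gauss-pivot rule of section Ⅲ):

```python
from fractions import Fraction as Fr

def gauss_jordan_inverse(A):
    """Invert a nonsingular square matrix by reducing (A | I) to (I | A^-1)."""
    n = len(A)
    # Build the augmented matrix (A | I) with exact fractions.
    M = [[Fr(x) for x in row] + [Fr(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Choose the biggest pivot (in absolute value) and swap it up.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if M[pivot][col] == 0:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [x / M[col][col] for x in M[col]]   # scale pivot row to 1
        for r in range(n):                           # clear the column
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[0, 11], [-2, 3]]
assert gauss_jordan_inverse(A) == [[Fr(3, 22), Fr(-1, 2)], [Fr(1, 11), 0]]
```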

Ⅱ-4-4.Rank of a matrix
The rank of a matrix 𝐴 is the dimension of the vector subspace spanned by its columns.
To find the rank of a matrix, we transform the matrix into its echelon form using the elementary
transformations, then determine the rank as the number of non-zero rows.
Theorem
The rank of a matrix 𝐴 of 𝑴𝒏(𝕂) is equal to 𝒏 if and only if 𝐴 is a non-singular matrix.

Example
𝑟𝑎𝑛𝑘 (  1 1 0      = 𝑟𝑎𝑛𝑘 (  0 1 0      𝐶1 → 𝐶1 − 𝐶2
       −2 0 2               −2 0 2
        1 2 3 )             −1 2 3 )

             = 𝑟𝑎𝑛𝑘 (  0 1 0      𝐶3 → 𝐶3 + 𝐶1
                      −2 0 0
                      −1 2 2 )

             = 𝑟𝑎𝑛𝑘 (  0 1 0      𝑅3 → 𝑅3 − (1/2)𝑅2
                      −2 0 0
                       0 2 2 )

             = 𝑟𝑎𝑛𝑘 (  0 1 0      𝑅3 → 𝑅3 − 2𝑅1
                      −2 0 0
                       0 0 2 )

             = 𝑟𝑎𝑛𝑘 (  1  0 0     𝐶1 ↔ 𝐶2
                       0 −2 0
                       0  0 2 )
             = 3
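The echelon-form procedure can be coded directly; a Python sketch (our own row-reduction helper, using exact fractions so no pivots are lost to rounding):

```python
from fractions import Fraction as Fr

def rank(A):
    """Rank = number of non-zero rows after reduction to echelon form."""
    M = [[Fr(x) for x in row] for row in A]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        # Find a row at or below position r with a non-zero entry in column c.
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):           # eliminate below the pivot
            factor = M[i][c] / M[r][c]
            M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

assert rank([[1, 1, 0], [-2, 0, 2], [1, 2, 3]]) == 3  # the example above
assert rank([[1, 2], [2, 4]]) == 1                    # proportional rows
```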
Ⅲ-System of Linear Equations

Ⅲ-1. Solution of the linear system 𝑨𝑿 = 𝒃


We now study how to find the solutions of a system of 𝑚 linear equations in 𝑛 unknowns.
Consider the system of equations in the unknowns 𝑥1, 𝑥2, … , 𝑥𝑛 :
      𝑎11𝑥1 + 𝑎12𝑥2 + … + 𝑎1𝑛𝑥𝑛 = 𝑏1
(𝑆) { 𝑎21𝑥1 + 𝑎22𝑥2 + … + 𝑎2𝑛𝑥𝑛 = 𝑏2
        ⋮       ⋮            ⋮      ⋮
      𝑎𝑚1𝑥1 + 𝑎𝑚2𝑥2 + … + 𝑎𝑚𝑛𝑥𝑛 = 𝑏𝑚
It is called a system of linear equations with 𝑛 unknowns 𝑥1, 𝑥2, … , 𝑥𝑛.
If the constants 𝑏1, 𝑏2, … , 𝑏𝑚 (𝑏𝑗 ∈ 𝕂) are all zero, the system is said to be homogeneous.

The system can be put in the matrix form as 𝐴𝑋 = 𝑏


Where 𝐴 = ( 𝑎11 ⋯ 𝑎1𝑛        𝑋 = ( 𝑥1        𝑏 = ( 𝑏1
             ⋮  ⋱  ⋮               ⋮               ⋮
            𝑎𝑚1 ⋯ 𝑎𝑚𝑛 ),          𝑥𝑛 ),            𝑏𝑚 )

The matrix 𝐴 = (𝑎𝑖𝑗)𝑚×𝑛 is called the coefficient matrix, the column matrix 𝑋 is called the matrix of
the unknowns, and 𝑏 is called the column matrix of constants.

Ⅲ-2. Methods of solving system of linear equations


Solving the system (𝑆) means finding the set of solutions (the values of the column matrix of unknowns).

Given any system of linear equations, there are three possible outcomes :

 The system has a unique solution
 The system has infinitely many solutions
 The system has no solutions.

Ⅲ-2.1 – Method of Inversion


Consider the matrix equation 𝐴𝑋 = 𝑏 where det 𝐴 ≠ 0. Multiplying both sides by 𝐴−1, we have
𝐴−1(𝐴𝑋) = 𝐴−1𝑏
𝑋 = 𝐴−1𝑏

Example
Let (𝑆)
3𝑥 + 3𝑦 − 2𝑧 =5
(𝑆) { 2𝑦 + 7𝑧 =0
𝑥+𝑦−𝑧 =3
Where
𝐴 = ( 3 3 −2        𝑏 = ( 5
      0 2  7              0
      1 1 −1 ),           3 )
𝐴 is non-singular : det 𝐴 = −2 ≠ 0.

𝐴−1 = −(1/2) ( −9  1  25        (  9/2  −1/2  −25/2
                7 −1 −21    =    −7/2   1/2   21/2
               −2  0   6 )         1     0     −3  )

𝑋 = 𝐴−1𝑏 = (  9/2 × 5 − 1/2 × 0 − 25/2 × 3        ( −15
             −7/2 × 5 + 1/2 × 0 + 21/2 × 3    =      14
               1  × 5 + 0  × 0 −  3  × 3  )         −4 )

The system has the unique solution (𝑥, 𝑦, 𝑧) = (−15, 14, −4).

Ⅲ-2.2– Cramer’s rule method


Definition A system (𝑆) is said to be a Cramer system if the matrix 𝐴 of the system (𝑆) is a
non-singular square matrix.

Theorem (Cramer's rule)

Let (𝑆) be a Cramer system with matrix 𝐴. The solution for the unknowns
𝑥1, 𝑥2, … , 𝑥𝑛 is given by :
𝑥𝑗 = det(𝐴𝑗) / det(𝐴)
where 𝐴𝑗 is the matrix 𝐴 whose 𝑗-th column has been replaced by the constants in 𝑏.

Example : the same previous system.

By Cramer's rule the solution is

𝑥 = |5 3 −2 ; 0 2 7 ; 3 1 −1| / (−2) = 30/(−2) = −15,
𝑦 = |3 5 −2 ; 0 0 7 ; 1 3 −1| / (−2) = −28/(−2) = 14,
𝑧 = |3 3 5 ; 0 2 0 ; 1 1 3| / (−2) = 8/(−2) = −4
(rows of each 3×3 determinant separated by semicolons).
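Cramer's rule is a few lines of code once a determinant routine is available; a Python sketch (our own helpers, in exact arithmetic):

```python
from fractions import Fraction as Fr

def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def cramer(A, b):
    """Solve A x = b for a nonsingular square A by Cramer's rule."""
    d = det(A)
    sols = []
    for j in range(len(A)):
        # A_j: replace the j-th column of A with the constants b.
        Aj = [row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]
        sols.append(Fr(det(Aj), d))
    return sols

A = [[3, 3, -2], [0, 2, 7], [1, 1, -1]]
b = [5, 0, 3]
assert det(A) == -2
assert cramer(A, b) == [-15, 14, -4]
```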

Ⅲ-2.3 – Gaussian Elimination method (using elementary row operations)

The Gaussian elimination method is more general than the previous ones. The matrix 𝐴 of the system
does not have to be a non-singular square matrix; it can be of type (𝑚, 𝑛) (rectangular).
Gaussian elimination is based on transforming the augmented matrix (𝐴|𝑏) of the system into
some kind of upper triangular form using the row operations :

(𝐴|𝑏) = ( 𝑎11 ⋯ 𝑎1𝑛 | 𝑏1
           ⋮  ⋱  ⋮  |  ⋮
          𝑎𝑚1 ⋯ 𝑎𝑚𝑛 | 𝑏𝑚 )
to
(𝐴′|𝑏′) = ( 𝑎11  …    …   𝑎1𝑛  | 𝑏1
             0  𝑎′22  …   𝑎′2𝑛 | 𝑏′2
             ⋮    0   ⋱    ⋮   |  ⋮
             0    0   …   𝑎′𝑟𝑛 | 𝑏′𝑟 )

Theorem
Any row operation used on the augmented matrix of the system will not change the corresponding
solution set of the system.
Gaussian elimination steps
𝟏𝒔𝒕 𝒔𝒕𝒆𝒑 : starting from the left, find the first non-zero column. The first pivot is obtained by taking
the biggest (in absolute value) element in this column; it is called the Gauss pivot (suppose it is 𝑎11,
for example).

𝟐𝒏𝒅 𝒔𝒕𝒆𝒑 : use row operations on 𝑅1, 𝑅2, … , 𝑅𝑚 to make the entries below the pivot position equal
to zero, by 𝑅2 → 𝑅2 − (𝑎21/𝑎11)𝑅1, … , 𝑅𝑚 → 𝑅𝑚 − (𝑎𝑚1/𝑎11)𝑅1.

𝟑𝒓𝒅 𝒔𝒕𝒆𝒑 : ignoring the row containing the pivot position, repeat step 1 and step 2 with the
remaining rows.
Example

1- The previous system solved by the Gauss method :

(𝐴|𝑏) = ( 3 3 −2 | 5
          0 2  7 | 0
          1 1 −1 | 3 )    the pivot 3 is in the first row

𝑅3 → 𝑅3 − (1/3)𝑅1 :   ( 3 3 −2   | 5
                        0 2  7   | 0
                        0 0 −1/3 | 4/3 )

𝑅3 → −3𝑅3 :           ( 3 3 −2 | 5
                        0 2  7 | 0
                        0 0  1 | −4 )
The system (𝑆) becomes
  3𝑥 + 3𝑦 − 2𝑧 = 5
{      2𝑦 + 7𝑧 = 0
             𝑧 = −4

We substitute 𝑧 = −4 in the 2nd equation : 2𝑦 + 7(−4) = 0, so 𝑦 = 14; then 𝑥 = −15 from the 1st
equation.
2- Let the system (𝑆)
  𝑥 − 𝑦 + 2𝑧 = 4
{ 𝑥     + 𝑧 = 6
  2𝑥 − 3𝑦 + 5𝑧 = 4
  3𝑥 + 2𝑦 − 𝑧 = 1
The augmented matrix for the Gaussian elimination :
(𝐴|𝑏) = ( 1 −1  2 | 4
          1  0  1 | 6
          2 −3  5 | 4
          3  2 −1 | 1 )
Use the row operations
𝑅2 → 𝑅2 − 𝑅1,
𝑅3 → 𝑅3 − 2𝑅1,   ( 1 −1  2 | 4
𝑅4 → 𝑅4 − 3𝑅1 :    0  1 −1 | 2
                   0 −1  1 | −4
                   0  5 −7 | −11 )

𝑅3 → 𝑅3 + 𝑅2 :   ( 1 −1  2 | 4
                   0  1 −1 | 2
                   0  0  0 | −2
                   0  5 −7 | −11 )
The system (𝑆) has no solution : the 3rd row reads 0𝑥 + 0𝑦 + 0𝑧 = −2, which is a contradiction.
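For square nonsingular systems, the forward elimination above followed by back substitution can be coded directly; a Python sketch (our own implementation, checked on the first example):

```python
from fractions import Fraction as Fr

def gauss_solve(A, b):
    """Forward elimination on (A | b), then back substitution.
    Returns the unique solution; assumes A is square and nonsingular."""
    n = len(A)
    M = [[Fr(x) for x in row] + [Fr(bi)] for row, bi in zip(A, b)]
    for c in range(n):
        # Gauss pivot: biggest entry (in absolute value) in the column.
        pivot = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[pivot] = M[pivot], M[c]
        for r in range(c + 1, n):          # zero out entries below the pivot
            f = M[r][c] / M[c][c]
            M[r] = [a - f * p for a, p in zip(M[r], M[c])]
    x = [Fr(0)] * n
    for i in range(n - 1, -1, -1):         # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

A = [[3, 3, -2], [0, 2, 7], [1, 1, -1]]
assert gauss_solve(A, [5, 0, 3]) == [-15, 14, -4]
```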

