Algebra 2 Lecture Notes
Definition :
Let 𝐺 be a set. We call an internal composition law on 𝐺 any function from 𝐺 × 𝐺 to 𝐺.
An internal composition law on 𝐺 is often denoted by (⋆).
Example :
Addition (+) is an internal composition law on ℝ, represented as follows :
+: ℝ × ℝ → ℝ
(𝑥, 𝑦) → 𝑥 + 𝑦
Definition :
Let 𝐺 be a set and (⋆) be an operation on 𝐺. Then :
- ⋆ is associative if :
∀x, y, z ∈ 𝐺, (x ⋆ y) ⋆ z = x ⋆ (y ⋆ z)
- ⋆ has an identity element (neutral element) if :
∃e ∈ 𝐺, ∀x ∈ 𝐺, x ⋆ e = e ⋆ x = x
- For each x ∈ 𝐺, an element x′ ∈ 𝐺 is called the symmetric (or inverse) of x if and only if :
x ⋆ x′ = x′ ⋆ x = e
where e ∈ 𝐺 is the identity element.
I.1.1- Groups :
Definition :
A group is a set 𝐺 combined with an internal composition law , denoted by ⋆ ,
such that :
- ⋆ has an identity element.
- Each element in 𝐺 has an inverse element in 𝐺 .
- ⋆ is associative.
If, in addition, ⋆ is commutative, then (𝐺, ⋆) is called a commutative or abelian
group.
Example :
1- (ℝ, +) is a commutative group because :
a- ∀x, y, z ∈ ℝ, (x + y) + z = x + (y + z). (associativity)
b- The neutral element for (+) on ℝ is 0.
c- Each element 𝑥 of ℝ has a symmetric (−𝑥) for the internal law (+).
d- The law (+) is commutative on ℝ.
2- (ℝ, .) is not a group because 0 does not have an inverse element.
3- (ℤ, +) is a commutative group.
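As a small illustration (not part of the original notes), the Python sketch below checks the group axioms by brute force for addition modulo 𝑛 on the set {0, 1, …, 𝑛 − 1}; the modulus and all names are choices of this example.

```python
# Illustrative sketch: brute-force check of the group axioms for (Z_n, + mod n).
from itertools import product

n = 6
G = range(n)
op = lambda a, b: (a + b) % n          # internal composition law on G

associative = all(op(op(a, b), c) == op(a, op(b, c)) for a, b, c in product(G, G, G))
identity = next(e for e in G if all(op(e, x) == op(x, e) == x for x in G))
inverses = all(any(op(x, y) == op(y, x) == identity for y in G) for x in G)
commutative = all(op(a, b) == op(b, a) for a, b in product(G, G))

print(associative, identity, inverses, commutative)   # True 0 True True
```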
I.1.2- Ring :
Definition :
Let 𝐴 be a set combined with two internal laws denoted as (∗) and (δ).
We say that the triple (𝐴, ∗, δ) is a ring if :
- (𝐴, ∗) is a commutative group.
- For all 𝑥, 𝑦 , 𝑧 ∈ 𝐴
𝑥 𝛿 (𝑦 ∗ 𝑧) = (𝑥 𝛿 𝑦) ∗ (𝑥 𝛿 𝑧)
and (𝑥 ∗ 𝑦)𝛿 𝑧 = (𝑥 𝛿 𝑧) ∗ (𝑦 𝛿 𝑧)
which represents left and right distributivity
- 𝛿 is associative.
If, in addition, 𝛿 is commutative, we call (𝐴, ∗, δ) a commutative ring. If δ has a
neutral element, we refer to (𝐴, ∗, δ) as a unitary ring.
Example :
(ℝ, +, . ) , (ℂ, +, . ) are rings.
I.1.3- Field :
Definition :
Let 𝕂 be a set combined with two internal laws (∗) and (𝛿). We say that the
triple (𝕂, ∗, δ) is a field if :
- (𝕂, ∗, δ) is a unitary ring.
- (𝕂 − {𝑒}, δ) is a group, where 𝑒 is the neutral element of (∗).
If, in addition, δ is commutative, then we refer to (𝕂, ∗, δ) as a commutative field.
Example :
(ℝ, +, .) is a commutative field because :
- (ℝ, +, .) is a ring.
- 1 is the neutral element of the multiplication (.) over ℝ,
then (ℝ, +, .) is a unitary ring.
- Each element of (ℝ − {0}) has a symmetric for the law (.) :
∀𝑥 ∈ ℝ∗, ∃𝑦 ∈ ℝ∗ : 𝑥. 𝑦 = 𝑦. 𝑥 = 1, namely 𝑦 = 1/𝑥
- (. ) is associative
∀𝑥, 𝑦, 𝑧 ∈ ℝ 𝑥. (𝑦. 𝑧) = (𝑥. 𝑦). 𝑧
Let 𝐸 be a set equipped with an internal law (+) and an external law (.) with scalars in the field 𝕂 :
(+): 𝐸 × 𝐸 → 𝐸
     (𝑥, 𝑦) → 𝑥 + 𝑦
(.): 𝕂 × 𝐸 → 𝐸
     (𝜆, 𝑥) → 𝜆. 𝑥
Definition :
A vector space over the field 𝕂 (i.e. a 𝕂-vector space) is a triple (𝐸, +, .) that satisfies the following :
1- (𝐸, +) is commutative group.
2- ∀𝜆 ∈ 𝕂, ∀ 𝑢, 𝑣 ∈ 𝐸, 𝜆 . (𝑢 + 𝑣) = 𝜆 . 𝑢 + 𝜆 . 𝑣
3- ∀𝜆, 𝜇 ∈ 𝕂, ∀ 𝑢 ∈ 𝐸, (𝜆 + 𝜇) . 𝑢 = 𝜆 . 𝑢 + 𝜇 . 𝑢
4- ∀𝜆, 𝜇 ∈ 𝕂, ∀ 𝑢 ∈ 𝐸, (𝜆 . 𝜇) . 𝑢 = 𝜆 . ( 𝜇 . 𝑢)
5- ∀𝑢 ∈ 𝐸, 1𝕂 . 𝑢 = 𝑢
Remarks :
The elements of a vector space are called vectors and the elements of 𝕂 are called scalars.
If 𝕂 = ℝ, then the vector space 𝐸 is called a « real vector space », and if 𝕂 = ℂ, then 𝐸 is called
a « complex vector space ».
Example :
(𝑨(ℝ, ℝ), +, .), where 𝑨(ℝ, ℝ) is the set of real-valued functions from ℝ to ℝ, is a vector
space.
(ℝ[𝑋], +, .), the set of polynomials with coefficients in ℝ, is a vector space over the
field 𝕂 = ℝ.
A subset 𝐹 of a 𝕂-vector space 𝐸 is a vector subspace of 𝐸 if 0𝐸 ∈ 𝐹 and 𝐹 is closed under addition and scalar multiplication.
Example :
The set 𝐹 = {(𝑥, 𝑦, 𝑧) ∈ ℝ3 ; 𝑥 = 𝑦 + 2𝑧} is a vector subspace of ℝ3.
For 𝐸 a vector space, {0𝐸} is always a vector subspace.
Example : Let
𝐹 = {(𝑥, 0,0) ∈ ℝ3 | 𝑥 ∈ ℝ}
𝐺 = {(0, 0, 𝑧) ∈ ℝ3 | 𝑧 ∈ ℝ}
Then 𝐹 + 𝐺 = {(𝑥, 0, 𝑧) ∈ ℝ3 | 𝑥, 𝑧 ∈ ℝ}
Definition :
We say that 𝐸 is the direct sum of two subspaces 𝐹1 and 𝐹2 if :
1. 𝐸 = 𝐹1 + 𝐹2
2. 𝐹1 ∩ 𝐹2 = {0𝐸}
and we write 𝑬 = 𝑭𝟏 ⊕ 𝑭𝟐
Example
Suppose 𝐹1 is the subspace of ℝ2 of those vectors whose last coordinate equals 0, and 𝐹2 the
subspace of those vectors whose first coordinate equals 0.
Then ℝ2 = 𝐹1 ⊕ 𝐹2.
I.2- Vector Spaces Bases
Definition :
A vector 𝑢 ∈ 𝐸 is a linear combination of the list of vectors (𝑢1, 𝑢2 … 𝑢𝑛) if there exist
𝜆1, 𝜆2, … 𝜆𝑛 ∈ 𝕂 such that 𝑢 = 𝜆1𝑢1 + 𝜆2𝑢2 + ⋯ + 𝜆𝑛𝑢𝑛.
Example
- (17, −4, 2) is a linear combination of the list of vectors ((2, 1, −3), (1, −2, 4))
because we can find 𝜆1 = 6, 𝜆2 = 5 where (17, −4, 2) = 6(2, 1, −3) + 5(1, −2, 4).
- (17, −4, 5) is not a linear combination of ((2, 1, −3), (1, −2, 4)) because we cannot
find any 𝜆1, 𝜆2 ∈ ℝ satisfying :
(17, −4, 5) = 𝜆1(2, 1, −3) + 𝜆2(1, −2, 4)
(more exactly, the system
17 = 2𝜆1 + 𝜆2
−4 = 𝜆1 − 2𝜆2
5 = −3𝜆1 + 4𝜆2
has no solutions)
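These two claims can also be checked numerically. The Python sketch below (an illustration, not part of the notes) tests membership in the span by solving the corresponding linear system with NumPy's least-squares routine; the helper name in_span is a choice of this example.

```python
# Illustrative check: is v a linear combination of u1 = (2,1,-3) and u2 = (1,-2,4)?
import numpy as np

U = np.column_stack([(2, 1, -3), (1, -2, 4)])   # columns are u1, u2

def in_span(v, U, tol=1e-9):
    """Return (True/False, coefficients) for membership of v in the column span of U."""
    coeffs, *_ = np.linalg.lstsq(U, np.asarray(v, float), rcond=None)
    return np.allclose(U @ coeffs, v, atol=tol), coeffs

print(in_span((17, -4, 2), U))   # (True,  coefficients close to [6., 5.])
print(in_span((17, -4, 5), U))   # (False, best least-squares coefficients)
```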
Definition : (Spans)
The set of all linear combinations of the list of vectors (𝑢1, 𝑢2 … 𝑢𝑛) in 𝑬 is called the span of
𝑢1, 𝑢2 … 𝑢𝑛, denoted span(𝒖𝟏, 𝒖𝟐 … 𝒖𝒏). In other words :
span(𝒖𝟏, 𝒖𝟐 … 𝒖𝒏) = {𝝀𝟏𝒖𝟏 + 𝝀𝟐𝒖𝟐 + ⋯ + 𝝀𝒏𝒖𝒏 | 𝜆1, 𝜆2, … 𝜆𝑛 ∈ 𝕂}
Example
- (17, −4,2) ∈ 𝑠𝑝𝑎𝑛((2,1, −3), (1, −2,4))
- (17, −4,5) ∉ 𝑠𝑝𝑎𝑛((2,1, −3), (1, −2,4))
- (2, 1, −3) ∈ 𝑠𝑝𝑎𝑛((2, 1, −3), (1, −2, 4)) because (2, 1, −3) = 𝜆1(2, 1, −3) + 𝜆2(1, −2, 4)
with 𝜆1 = 1 and 𝜆2 = 0.
Properties
Let (𝑢1, 𝑢2 … 𝑢𝑛) be a list of vectors in 𝑬. Then :
. ∀ 𝑖 ∈ {1, … , 𝑛}, 𝑢𝑖 ∈ 𝑠𝑝𝑎𝑛(𝑢1, 𝑢2 … 𝑢𝑛)
. 𝑠𝑝𝑎𝑛(𝑢1, 𝑢2 … 𝑢𝑛) is a subspace of 𝑬
. If 𝐹 ⊂ 𝐸 is a subspace such that 𝑢1, 𝑢2 … 𝑢𝑛 ∈ 𝐹, then 𝑠𝑝𝑎𝑛(𝑢1, 𝑢2 … 𝑢𝑛) ⊂ 𝐹
Definition :
If 𝑠𝑝𝑎𝑛(𝑢1, 𝑢2 … 𝑢𝑛) = 𝐸, then we say that (𝑢1, 𝑢2 … 𝑢𝑛) spans 𝐸 and we call 𝐸 a finite-
dimensional space.
Example
The list of usual vectors ((1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)) spans ℝ4.
I.2-2-Linear Independence
Definition :
A list of vectors (𝑢1, 𝑢2 … 𝑢𝑛) is called linearly independent if the only solution 𝜆1, 𝜆2, … 𝜆𝑛 ∈ 𝕂 of
the equation 𝝀𝟏𝒖𝟏 + 𝝀𝟐𝒖𝟐 + ⋯ + 𝝀𝒏𝒖𝒏 = 𝟎𝑬 is 𝜆1 = 𝜆2 = ⋯ = 𝜆𝑛 = 0.
(𝑢1, 𝑢2 … 𝑢𝑛) is called linearly dependent if there exist 𝜆1, 𝜆2, … 𝜆𝑛 ∈ 𝕂, not all zero, such that
𝝀𝟏𝒖𝟏 + 𝝀𝟐𝒖𝟐 + ⋯ + 𝝀𝒏𝒖𝒏 = 𝟎𝑬.
Example
. The list of vectors ((1,0,0), (0,1,0), (0,0,1)) is linearly independent in ℝ3.
. The list of vectors ((2,3,1), (1,−1,2), (7,3,8)) is linearly dependent in ℝ3 (because we can find
𝜆1 = 2, 𝜆2 = 3, 𝜆3 = −1 where 𝜆1(2,3,1) + 𝜆2(1,−1,2) + 𝜆3(7,3,8) = (0,0,0)).
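A quick numerical way to test such claims (an illustrative sketch, not part of the notes) is to compare the rank of the matrix whose columns are the given vectors with the number of vectors:

```python
# Illustrative check of linear (in)dependence via the matrix rank.
import numpy as np

def independent(vectors):
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

print(independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))    # True
print(independent([(2, 3, 1), (1, -1, 2), (7, 3, 8)]))   # False (2*u1 + 3*u2 - u3 = 0)
```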
I.2-3-Bases
Definition :
A list of vectors (𝑢1, 𝑢2 … 𝑢𝑛) is a basis for the finite-dimensional vector space 𝐸 if
(𝑢1, 𝑢2 … 𝑢𝑛) is linearly independent and 𝑠𝑝𝑎𝑛(𝑢1, 𝑢2 … 𝑢𝑛) = 𝐸.
Example
- The list of vectors ((1,0,0),(0,1,0),(0,0,1)) of ℝ3 is a basis of ℝ3 (it is usually called the
usual basis or standard basis of ℝ3).
- The list of vectors ((1,2),(1,1)) is a basis of ℝ2 .
Corollary : Every finite-dimensional vector space has a basis.
Theorem
Every linearly independent list of vectors in a finite-dimensional vector space 𝐸 can be
extended to a basis of 𝐸.
I.2-4-Dimension
Definition :
We call the length of any basis of 𝐸 the dimension of 𝐸, and we denote this by 𝒅𝒊𝒎𝑬
Theorem
Let 𝐸 be a vector space with 𝒅𝒊𝒎𝑬 = 𝒏.
- If 𝐸 = 𝑠𝑝𝑎𝑛(𝑢1, 𝑢2 … 𝑢𝑛), then (𝑢1, 𝑢2 … 𝑢𝑛) is a basis of 𝐸.
- If (𝑢1, 𝑢2 … 𝑢𝑛) is linearly independent in 𝐸, then (𝑢1, 𝑢2 … 𝑢𝑛) is a basis of 𝐸.
Properties
If 𝐹 and 𝐺 are subspaces of a finite-dimensional vector space 𝐸, then :
𝑑𝑖𝑚(𝐹 + 𝐺 ) = 𝑑𝑖𝑚(𝐹 ) + 𝑑𝑖𝑚(𝐺 ) − 𝑑𝑖𝑚(𝐹 ∩ 𝐺)
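For instance, with the subspaces 𝐹 = {(𝑥, 0, 0)} and 𝐺 = {(0, 0, 𝑧)} from the earlier example, dim 𝐹 = dim 𝐺 = 1, 𝐹 ∩ 𝐺 = {0ℝ3} and 𝐹 + 𝐺 = {(𝑥, 0, 𝑧)}, so dim(𝐹 + 𝐺) = 1 + 1 − 0 = 2.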
Definition :
Let 𝐸 and 𝐹 be two 𝕂-vector spaces. A linear map from 𝐸 to 𝐹 is a function 𝑻: 𝐸 → 𝐹
that satisfies :
1- 𝑇(𝑢 + 𝑣) = 𝑇(𝑢) + 𝑇(𝑣) for all 𝑢, 𝑣 ∈ 𝐸
2- 𝑇(𝜆𝑢) = 𝜆𝑇(𝑢) for all 𝜆 ∈ 𝕂 and all 𝑢 ∈ 𝐸
Remarks
The properties 1 and 2 can be written as one : 𝑇(𝜆𝑢 + 𝑣) = 𝜆𝑇(𝑢) + 𝑇(𝑣) for all 𝜆 ∈ 𝕂
and all 𝑢, 𝑣 ∈ 𝐸.
- The set of all linear maps from 𝐸 to 𝐹 is denoted by 𝑳(𝑬, 𝑭). Moreover, if 𝐸 = 𝐹,
then we write 𝑳(𝑬, 𝑬) = 𝑳(𝑬), and an element 𝑇 of 𝑳(𝑬) is called a linear operator on 𝐸.
Example
- The zero map 𝟎: 𝐸 → 𝐹 mapping every element 𝑢 ∈ 𝐸 to 0𝐹 ∈ 𝐹 is linear.
- The identity map 𝑰: 𝐸 → 𝐸 defined by 𝑰(𝑢) = 𝑢 is linear.
- Let 𝑇: ℝ2 → ℝ2
         (𝑥, 𝑦) → (𝑥 − 2𝑦, 3𝑥 + 𝑦)
Check the linearity of 𝑇 :
For (𝑥, 𝑦), (𝑥′, 𝑦′) ∈ ℝ2 we have :
𝑇((𝑥, 𝑦) + (𝑥′, 𝑦′)) = 𝑇(𝑥 + 𝑥′, 𝑦 + 𝑦′)
= ((𝑥 + 𝑥′) − 2(𝑦 + 𝑦′), 3(𝑥 + 𝑥′) + (𝑦 + 𝑦′))
= (𝑥 − 2𝑦, 3𝑥 + 𝑦) + (𝑥′ − 2𝑦′, 3𝑥′ + 𝑦′)
= 𝑇(𝑥, 𝑦) + 𝑇(𝑥′, 𝑦′)
Similarly, for (𝑥, 𝑦) ∈ ℝ2 and 𝜆 ∈ ℝ we have :
𝑇(𝜆(𝑥, 𝑦)) = 𝑇(𝜆𝑥, 𝜆𝑦) = (𝜆𝑥 − 2𝜆𝑦, 3𝜆𝑥 + 𝜆𝑦) = 𝜆(𝑥 − 2𝑦, 3𝑥 + 𝑦) = 𝜆𝑇(𝑥, 𝑦)
Hence 𝑇 is linear.
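The same computation can be sampled numerically; the short sketch below (an illustration, not part of the notes) checks additivity and homogeneity of 𝑇 on random vectors.

```python
# Illustrative numeric check of the linearity of T(x, y) = (x - 2y, 3x + y).
import numpy as np

def T(v):
    x, y = v
    return np.array([x - 2 * y, 3 * x + y])

rng = np.random.default_rng(0)
u, v = rng.standard_normal(2), rng.standard_normal(2)
lam = 2.5

print(np.allclose(T(u + v), T(u) + T(v)))      # additivity
print(np.allclose(T(lam * u), lam * T(u)))     # homogeneity
```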
Example
We recall 𝑇 from the previous example. Its kernel is
Ker 𝑇 = {(𝑥, 𝑦) ∈ ℝ2 | 𝑇(𝑥, 𝑦) = (0, 0)} = {(𝑥, 𝑦) ∈ ℝ2 | 𝑥 − 2𝑦 = 0 and 3𝑥 + 𝑦 = 0} = {(0, 0)}.
Theorem
Let 𝑇 ∈ 𝑳(𝑬, 𝑭). Then :
𝑇 is injective ⇔ 𝐾𝑒𝑟 𝑇 = {0𝐸}
Theorem
Let 𝑇 ∈ 𝑳(𝑬, 𝑭). Then 𝑟𝑎𝑛𝑔𝑒(𝑇) (some mathematicians use the notation 𝐼𝑚(𝑇)) is a
subspace of 𝐹.
Definition :
A linear map 𝑇: 𝐸 → 𝐹 is called surjective if 𝑟𝑎𝑛𝑔𝑒(𝑇) = 𝐹
A linear map 𝑇: 𝐸 → 𝐹 is called bijective if 𝑇 is both injective and surjective
Theorem
Suppose 𝐸 is a finite-dimensional vector space and 𝑇 ∈ 𝑳(𝑬, 𝑭). Then 𝑟𝑎𝑛𝑔𝑒(𝑇) is finite-
dimensional and 𝑑𝑖𝑚𝐸 = dim(𝐾𝑒𝑟 𝑇) + dim(𝑟𝑎𝑛𝑔𝑒(𝑇))
Example
𝑇: ℝ2 → ℝ2
    (𝑥, 𝑦) → (𝑥 − 2𝑦, 3𝑥 + 𝑦)
has 𝐾𝑒𝑟 𝑇 = {0ℝ2} and 𝑟𝑎𝑛𝑔𝑒(𝑇) = ℝ2.
Then 𝑑𝑖𝑚ℝ2 = dim(𝐾𝑒𝑟 𝑇) + dim(𝑟𝑎𝑛𝑔𝑒(𝑇)) = 0 + 2 = 2
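As an illustrative check (not part of the notes), the dimensions of Ker 𝑇 and range(𝑇) can be computed from the matrix of 𝑇 in the standard bases with SymPy:

```python
# Illustrative check of dim E = dim(Ker T) + dim(range T) for T(x, y) = (x - 2y, 3x + y).
import sympy as sp

A = sp.Matrix([[1, -2],
               [3,  1]])                # matrix of T in the standard bases
dim_ker = len(A.nullspace())            # number of basis vectors of Ker T
dim_range = len(A.columnspace())        # number of basis vectors of range(T)

print(dim_ker, dim_range, dim_ker + dim_range == A.cols)   # 0 2 True
```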
Corollary
Let 𝑻 ∈ 𝑳(𝑬, 𝑭)
Definition :
Let 𝐵 = (𝑒1, 𝑒2, … , 𝑒𝑛) be a basis of 𝐸 and 𝐵′ = (𝑒′1, 𝑒′2, … , 𝑒′𝑚) a basis of 𝐹, and let 𝑇 be a
linear map from 𝐸 to 𝐹, so that the vectors 𝑇(𝑒1), 𝑇(𝑒2), … , 𝑇(𝑒𝑛) are vectors in 𝐹. Writing
𝑇(𝑒𝑗) = 𝜆1𝑗 𝑒′1 + 𝜆2𝑗 𝑒′2 + ⋯ + 𝜆𝑚𝑗 𝑒′𝑚 (for 𝑗 = 1, … , 𝑛), the table

          𝑇(𝑒1)   𝑇(𝑒2)   …   𝑇(𝑒𝑛)
𝑒′1     (  𝜆11     𝜆12    …    𝜆1𝑛  )
𝑒′2     (  𝜆21     𝜆22    …    𝜆2𝑛  )
⋮       (   ⋮       ⋮           ⋮   )
𝑒′𝑚     (  𝜆𝑚1     𝜆𝑚2    …    𝜆𝑚𝑛  )

is called the matrix of the linear map 𝑇 relative to the bases 𝐵 and 𝐵′; it is mostly denoted 𝑀𝑎𝑡𝐵𝐵′(𝑇).
Remarks :
1. If 𝐸 = 𝐹 and 𝐵 = 𝐵′, we simply write 𝑀𝑎𝑡𝐵(𝑇).
2. The matrix 𝑀𝑎𝑡𝐵𝐵′(𝑇) depends on the choice of the bases 𝐵 and 𝐵′.
Example :
1. Let 𝑇: ℝ3 → ℝ2 be the linear map defined by 𝑇(𝑥, 𝑦, 𝑧) = (2𝑥 − 𝑦, 𝑥 + 𝑦 + 𝑧), let
𝐵 = {(1,0,0), (0,1,0), (0,0,1)} be the usual basis of ℝ3 and 𝐵′ = {(1,0), (0,1)} the usual
basis of ℝ2. Then 𝑇(1,0,0) = (2,1), 𝑇(0,1,0) = (−1,1) and 𝑇(0,0,1) = (0,1), so the matrix of 𝑇 is
𝑀𝑎𝑡𝐵𝐵′(𝑇) = ( 2  −1  0 )
             ( 1   1  1 )
2. Let 𝑔 be the linear map from ℝ3[𝑋] into ℝ3[𝑋] defined by 𝑔 : 𝑝 → 𝑋2𝑝′′ + 𝑝(1), where the usual
basis of ℝ3[𝑋] is 𝐵 = {1, 𝑋, 𝑋2, 𝑋3}.
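The matrix of 𝑔 is not written out above. As an illustration (a sketch, not part of the notes), one finds 𝑔(1) = 1, 𝑔(𝑋) = 1, 𝑔(𝑋2) = 1 + 2𝑋2 and 𝑔(𝑋3) = 1 + 6𝑋3, so the columns of 𝑀𝑎𝑡𝐵(𝑔) are (1,0,0,0), (1,0,0,0), (1,0,2,0) and (1,0,0,6); the SymPy code below reproduces this computation.

```python
# Illustrative SymPy sketch: matrix of g(p) = X^2 * p'' + p(1) in B = {1, X, X^2, X^3}.
import sympy as sp

X = sp.symbols('X')
basis = [sp.Integer(1), X, X**2, X**3]

def g(p):
    # g(p) = X^2 times the second derivative of p, plus the value p(1)
    return sp.expand(X**2 * sp.diff(p, X, 2) + p.subs(X, 1))

# Column j of Mat_B(g) holds the coordinates of g(basis[j]) in the basis B.
columns = [[g(p).coeff(X, k) for k in range(4)] for p in basis]
Mat_B_g = sp.Matrix(4, 4, lambda i, j: columns[j][i])
print(Mat_B_g)   # Matrix([[1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 2, 0], [0, 0, 0, 6]])
```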
Ⅱ-2-Matrix
Definition : We call a matrix of type (𝒎, 𝒏) (or of dimensions 𝒎 × 𝒏) over a field 𝕂 any array of
numbers of the form
𝑎11 𝑎12 … 𝑎1𝑛
𝑎21 𝑎22 … 𝑎2𝑛
𝐴=( ⋮ ⋮ ⋮ )
𝑎𝑚1 𝑎𝑚2 … 𝑎𝑚𝑛
The set of matrices over 𝕂 of type (𝒎, 𝒏) is denoted by 𝑴𝒎,𝒏(𝕂); if 𝑚 = 𝑛, the set is
denoted by 𝑴𝒏(𝕂).
If 𝑚 = 𝑛, the matrix 𝐴 = (𝑎𝑖𝑗) is called a square matrix.
If 𝑛 = 1 then 𝐴 is called a column matrix (column vector) :
      𝒂𝟏𝟏
𝑨 = ( 𝒂𝟐𝟏 )
       ⋮
      𝒂𝒎𝟏
If 𝑚 = 1 then 𝐴 is called a row matrix (row vector) : 𝑨 = (𝒂𝟏𝟏 𝒂𝟏𝟐 … 𝒂𝟏𝒏 )
The zero matrix of type (𝒎, 𝒏) is the matrix whose entries 𝑎𝑖𝑗 are all equal to 0; it is
denoted by 𝟎𝒎×𝒏.
𝟎 𝟎 𝟎
Example : 𝟎𝟑×𝟑 = (𝟎 𝟎 𝟎).
𝟎 𝟎 𝟎
Ⅱ-3-Operations on Matrices
If 𝐴 = (𝑎𝑖𝑗)𝑚×𝑛 and 𝐵 = (𝑏𝑖𝑗)𝑛×𝑝, their product is the matrix 𝐶 = 𝐴𝐵 = (𝑐𝑖𝑗)𝑚×𝑝 with entries
𝑐𝑖𝑗 = ∑ₖ₌₁ⁿ 𝑎𝑖𝑘 𝑏𝑘𝑗 = 𝑎𝑖1𝑏1𝑗 + 𝑎𝑖2𝑏2𝑗 + ⋯ + 𝑎𝑖𝑛𝑏𝑛𝑗
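As an illustration of this formula (a sketch, not part of the notes), a direct Python implementation, checked on the matrices of Example (Ⅱ-3) below:

```python
# Direct implementation of the product formula c_ij = sum_k a_ik * b_kj.
def matmul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "A must be m x n and B n x p"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 3, -4], [0, -5, 2], [2, 1, 3]]
B = [[1, 0, 3, 1], [0, 2, 1, 4], [0, -3, -1, 6]]
print(matmul(A, B))   # [[1, 18, 10, -11], [0, -16, -7, -8], [2, -7, 4, 24]]
```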
Theorem
Let 𝐸, 𝐹 and 𝐺 be vector spaces and let 𝐵1, 𝐵2, 𝐵3 be bases of these vector spaces respectively.
Let 𝑇 ∈ 𝑳(𝑬, 𝑭) and 𝑆 ∈ 𝑳(𝑭, 𝑮). Then
𝑀𝑎𝑡𝐵1𝐵3(𝑆 ∘ 𝑇) = 𝑀𝑎𝑡𝐵2𝐵3(𝑆) 𝑀𝑎𝑡𝐵1𝐵2(𝑇)
Ⅱ-3-5. Transpose:
The transpose of a matrix 𝐴 = (𝑎𝑖𝑗)𝑚×𝑛, written 𝐴′ (or 𝐴𝑡), is the matrix obtained by writing the rows
of 𝐴 in order as columns. That is,
𝐴𝑡 = (𝑎𝑗𝑖)𝑛×𝑚
Example (Ⅱ-3) :
Let 𝐴, 𝐵 :
1 3 −4 1 0 3 1
𝐴 = (0 −5 2 ) ,𝐵 = (0 2 1 4)
2 1 3 0 −3 −1 6
1 3 −4 1 0 3 1
𝐴𝐵 = (0 −5 2 ) × (0 2 1 4)
2 1 3 0 −3 −1 6
𝐴𝐵 = ( 1×1 + 3×0 + (−4)×0   1×0 + 3×2 + (−4)×(−3)   1×3 + 3×1 + (−4)×(−1)   1×1 + 3×4 + (−4)×6
       0×1 + (−5)×0 + 2×0   0×0 + (−5)×2 + 2×(−3)   0×3 + (−5)×1 + 2×(−1)   0×1 + (−5)×4 + 2×6
       2×1 + 1×0 + 3×0      2×0 + 1×2 + 3×(−3)      2×3 + 1×1 + 3×(−1)      2×1 + 1×4 + 3×6 )
1 18 10 −11
𝐴𝐵 = (0 −16 −7 −8 )
2 −7 4 24
𝐵𝐴 is not defined because 𝐵 and 𝐴 are not conformable (the number of columns of 𝐵 is not equal to the number of rows of 𝐴).
      1  0  0
𝐵′ = ( 0  2 −3 )
      3  1 −1
      1  4  6
      1  0  2
𝐴′ = ( 3 −5  1 )
     −4  2  3
1 0 3 1 1 18 10 −11
𝐵 + 𝐴𝐵 = (0 2 1 4 ) + (0 −16 −7 −8 )
0 −3 −1 6 2 −7 4 24
1+1 0 + 18 10 + 3 1 + (−11)
= (0 + 0 −16 + 2 1 + (−7) −8 + 4 )
2 + 0 −3 + (−7) 4 + (−1) 24 + 6
2 18 13 −10
𝐵 + 𝐴𝐵 = (0 −14 −6 −4 )
2 −10 3 30
Matrix multiplication is distributive over addition (whenever the products are defined) :
𝐴(𝐵 + 𝐶) = 𝐴𝐵 + 𝐴𝐶
(𝐵 + 𝐶)𝐴 = 𝐵𝐴 + 𝐶𝐴
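The worked example and the left distributivity property can be verified with NumPy (an illustrative sketch, not part of the notes; the matrix 𝐶 below is an arbitrary 3 × 4 matrix chosen so that 𝐴(𝐵 + 𝐶) is defined):

```python
# NumPy verification of Example (II-3) and of left distributivity.
import numpy as np

A = np.array([[1, 3, -4], [0, -5, 2], [2, 1, 3]])
B = np.array([[1, 0, 3, 1], [0, 2, 1, 4], [0, -3, -1, 6]])

print(A @ B)        # [[1 18 10 -11] [0 -16 -7 -8] [2 -7 4 24]]
print(B + A @ B)    # [[2 18 13 -10] [0 -14 -6 -4] [2 -10 3 30]]
print(B.T)          # transpose of B (a 4 x 3 matrix)

C = np.array([[2, 1, 0, -1], [1, 1, 1, 1], [0, 0, 3, 2]])   # arbitrary 3 x 4 matrix
print(np.array_equal(A @ (B + C), A @ B + A @ C))           # True
```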
Ⅱ-4-Square Matrices
As we defined previously, a square matrix is a matrix of type 𝑛 × 𝑛 :
      𝑎11  𝑎12  …  𝑎1𝑛
𝐴 = ( 𝑎21  𝑎22  …  𝑎2𝑛 )
       ⋮    ⋮   ⋱   ⋮
      𝑎𝑛1  𝑎𝑛2  …  𝑎𝑛𝑛
For a square matrix 𝐴 and positive integers 𝑟, 𝑠, the powers of 𝐴 satisfy :
𝐴𝑟 𝐴𝑠 = 𝐴𝑟+𝑠 = 𝐴𝑠 𝐴𝑟
1 0 0
Example : 𝑰𝟑 = (0 1 0)
0 0 1
            5 7 4
Example : 𝐴 = (0 2 0)
            0 0 3
𝐴 is an upper triangular matrix.
            −1 0 0
Example : 𝐴 = ( 4 2 0)
             5 8 3
𝐴 is a lower triangular matrix.
8 −2 7
Example : 𝐴 = (−2 −9 3)
7 3 5
𝐴 is a symmetric matrix (𝐴𝑡 = 𝐴).
A square matrix 𝐴 = (𝑎𝑖𝑗)𝑛×𝑛 is said to be skew-symmetric if each element 𝑎𝑖𝑗 of the upper
triangular part equals the negative of the corresponding element 𝑎𝑗𝑖 of the lower triangular part. Example :
      8  −2  7
𝐵 = ( 2  −9  3 )
     −7  −3  5
𝐵 is skew- symmetric matrix.
Trace of a matrix
The trace of a square matrix 𝐴 is the sum of the diagonal entries of 𝐴, denoted 𝑻𝒓(𝐴).
Example
For the symmetric matrix 𝐴 above, 𝑻𝒓(𝐴) = 8 + (−9) + 5 = 4.
Some definitions
A matrix 𝐴 such that 𝐴2 = 𝐴 is called idempotent.
If 𝐴 is a matrix and 𝑟 is the least positive integer such that 𝐴𝑟+1 = 𝐴, then 𝐴 is called
periodic of period 𝑟.
If 𝐴 is a matrix for which 𝐴𝑟 = 𝟎 (the zero matrix), then 𝐴 is called nilpotent. If 𝑟 is the least
positive integer for which this is true, 𝐴 is said to be nilpotent of order 𝑟.
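As an illustration of these definitions (the two matrices below are chosen for this sketch and are not from the notes):

```python
# Illustrative check of idempotent and nilpotent matrices.
import numpy as np

P = np.array([[1, 0],
              [0, 0]])
N = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

print(np.array_equal(P @ P, P))                      # True: P is idempotent
print(np.array_equal(np.linalg.matrix_power(N, 3),
                     np.zeros((3, 3), dtype=int)))   # True: N^3 = 0
print(np.array_equal(np.linalg.matrix_power(N, 2),
                     np.zeros((3, 3), dtype=int)))   # False: N is nilpotent of order 3
```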
Ⅱ-4-3.a. Determinant
Definition : Let 𝐴 = (𝑎𝑖𝑗)𝑛×𝑛 be a square matrix of order 𝑛; then the number |𝐴| (also written
det(𝐴)) is called the determinant of the matrix 𝐴.
1. Determinant of 𝟐 × 𝟐 :
Let 𝐴 = ( 𝑎  𝑏 )  then |𝐴| = | 𝑎  𝑏 | = 𝑎𝑑 − 𝑐𝑏
        ( 𝑐  𝑑 )             | 𝑐  𝑑 |
2. Determinant of 𝟑 × 𝟑:
𝑎11 𝑎12 𝑎13
Let 𝐵 = (𝑎21 𝑎22 𝑎23 )
𝑎31 𝑎32 𝑎33
           | 𝑎11  𝑎12  𝑎13 |
Then |𝐵| = | 𝑎21  𝑎22  𝑎23 | = 𝑎11 | 𝑎22  𝑎23 |  −  𝑎12 | 𝑎21  𝑎23 |  +  𝑎13 | 𝑎21  𝑎22 |
           | 𝑎31  𝑎32  𝑎33 |       | 𝑎32  𝑎33 |         | 𝑎31  𝑎33 |         | 𝑎31  𝑎32 |
|𝐵| = 𝑎11 (𝑎22 𝑎33 − 𝑎23 𝑎32 ) − 𝑎12 (𝑎21 𝑎33 − 𝑎23 𝑎31 ) + 𝑎13 (𝑎21 𝑎32 − 𝑎22 𝑎31 )
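A direct Python implementation of this cofactor expansion (an illustrative sketch, not part of the notes), checked against numpy.linalg.det:

```python
# 3x3 determinant by cofactor expansion along the first row.
import numpy as np

def det2(a, b, c, d):
    return a * d - c * b                      # |a b; c d| = ad - cb

def det3(B):
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = B
    return (a11 * det2(a22, a23, a32, a33)
            - a12 * det2(a21, a23, a31, a33)
            + a13 * det2(a21, a22, a31, a32))

B = [[1, 0, 1], [2, -2, 4], [-3, -1, -3]]     # matrix A of the next example
print(det3(B), round(np.linalg.det(np.array(B))))   # 2 2
```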
Properties of the Determinant
Let 𝐴 = (𝑎𝑖𝑗)𝑛×𝑛 and 𝐵 = (𝑏𝑖𝑗)𝑛×𝑛 be two matrices and 𝜆 ∈ ℝ. Then :
- det(𝐴𝐵) = det(𝐴) det(𝐵)
- det(𝐴𝑡) = det(𝐴)
- det(𝜆𝐴) = 𝜆𝑛 det(𝐴)
Example
1. Compute the determinant of 𝐴 and the cofactor matrix of 𝐴 :
1 0 1
𝐴 = ( 2 −2 4 )
−3 −1 −3
Expanding along the first row :
det(𝐴) = 1 | −2   4 |  −  0 |  2   4 |  +  1 |  2  −2 |
           | −1  −3 |       | −3  −3 |       | −3  −1 |
𝑑𝑒𝑡(𝐴) = 1((−2 × −3) − (4 × −1)) − 0((2 × −3) − (4 × −3)) + 1((2 × −1) − (−2 × −3))
𝑑𝑒𝑡(𝐴) = 2 ≠ 0
Expanding along the first column gives the same value :
det(𝐴) = 1 | −2   4 |  −  2 |  0   1 |  +  (−3) |  0   1 |
           | −1  −3 |       | −1  −3 |          | −2   4 |
det(𝐴) = 1(10) − 2(1) + (−3)(2) = 2 ≠ 0
𝐴 is a non-singular matrix.
The cofactors of 𝐴 are :
𝐶11 = +((−2)(−3) − (4)(−1)) = 10    𝐶12 = −((2)(−3) − (4)(−3)) = −6    𝐶13 = +((2)(−1) − (−2)(−3)) = −8
𝐶21 = −((0)(−3) − (1)(−1)) = −1     𝐶22 = +((1)(−3) − (1)(−3)) = 0     𝐶23 = −((1)(−1) − (0)(−3)) = 1
𝐶31 = +((0)(4) − (1)(−2)) = 2       𝐶32 = −((1)(4) − (1)(2)) = −2      𝐶33 = +((1)(−2) − (0)(2)) = −2
so
                  10  −6  −8
𝑐𝑜𝑓𝑎𝑐𝑡𝑜𝑟(𝐴) = (  −1   0   1 )
                   2  −2  −2
Example
Find the adjoint (adjugate) matrix of 𝐴 from the previous example :
                                 10  −1   2
𝐴𝑑𝑗 𝐴 = (𝑐𝑜𝑓𝑎𝑐𝑡𝑜𝑟 𝐴)′ = (  −6   0  −2 )
                                 −8   1  −2
We call the general linear group of order 𝑛 over 𝕂, denoted 𝑮𝑳𝒏(𝕂), the group of all invertible
elements of 𝑴𝑛(𝕂).
Theorem 4
If 𝐴 is a non-singular matrix, then 𝐴−1 = (1/det(𝐴)) 𝐴𝑑𝑗 𝐴
Example
The inverse of the previous 𝐴 is
                              10  −1   2        5   −1/2    1
𝐴−1 = (1/det 𝐴) 𝐴𝑑𝑗 𝐴 = (1/2) ( −6   0  −2 ) = ( −3    0    −1 )
                              −8   1  −2       −4    1/2   −1
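The computation can be verified with NumPy (an illustrative sketch, not part of the notes):

```python
# NumPy check of the inverse obtained via the adjugate.
import numpy as np

A = np.array([[1, 0, 1], [2, -2, 4], [-3, -1, -3]], dtype=float)
adjA = np.array([[10, -1, 2], [-6, 0, -2], [-8, 1, -2]], dtype=float)

A_inv = adjA / np.linalg.det(A)              # A^{-1} = (1/det A) * Adj A
print(A_inv)                                 # approx. [[5, -0.5, 1], [-3, 0, -1], [-4, 0.5, -1]]
print(np.allclose(A @ A_inv, np.eye(3)))     # True
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```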
Let 𝐵 = ( 1  3 )
        ( 2  4 )
det(𝐵) = (1)(4) − (3)(2) = −2 ≠ 0
𝐵 is non-singular, then
𝐵−1 = (1/−2) (  4  −3 ) = ( −2    3/2 )
             ( −2   1 )   (  1   −1/2 )
Theorem 5
Let 𝐸, 𝐹 be 𝕂-vector spaces of the same dimension 𝑛 with respective bases 𝐵 and 𝐵′.
Let 𝑇 ∈ 𝑳(𝑬, 𝑭). The linear map 𝑇 is bijective if and only if 𝑴𝒂𝒕𝑩𝑩′(𝑻) is non-
singular.
Ⅱ-4-4.Rank of matrix
The rank of a matrix 𝐴 is the dimension of the vector subspace spanned by its columns.
To find the rank of a matrix, we transform the matrix into its echelon form using elementary
transformations. Then the rank is the number of non-zero rows.
Theorem
The rank of a matrix 𝐴 of 𝑴𝒏(𝕂) is equal to 𝒏 if and only if 𝐴 is a non-singular matrix.
Example
         1  1  0            0  1  0
𝑟𝑎𝑛𝑘 ( −2  0  2 ) = 𝑟𝑎𝑛𝑘 ( −2  0  2 )   𝐶1 → 𝐶1 − 𝐶2
         1  2  3           −1  2  3
          0  1  0
= 𝑟𝑎𝑛𝑘 ( −2  0  0 )   𝐶3 → 𝐶3 + 𝐶1
         −1  2  2
          0  1  0
= 𝑟𝑎𝑛𝑘 ( −2  0  0 )   𝑅3 → 𝑅3 − (1/2)𝑅2
          0  2  2
          0  1  0
= 𝑟𝑎𝑛𝑘 ( −2  0  0 )   𝑅3 → 𝑅3 − 2𝑅1
          0  0  2
          1  0  0
= 𝑟𝑎𝑛𝑘 (  0 −2  0 )   𝐶1 ↔ 𝐶2
          0  0  2
= 3
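The same rank can be obtained with NumPy (an illustrative sketch, not part of the notes):

```python
# NumPy check of the rank computed above.
import numpy as np

M = np.array([[1, 1, 0],
              [-2, 0, 2],
              [1, 2, 3]])
print(np.linalg.matrix_rank(M))   # 3, so M is non-singular
```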
Ⅲ-System of Linear Equations
A system of linear equations can be written in matrix form as 𝐴𝑋 = 𝑏, where the matrix 𝐴 = (𝑎𝑖𝑗)𝑛×𝑛 is
called the coefficient matrix, the column matrix 𝑋 is called the matrix of the unknowns, and 𝑏 is
called the column matrix of constants.
Example
Let (𝑆)
3𝑥 + 3𝑦 − 2𝑧 =5
(𝑆) { 2𝑦 + 7𝑧 =0
𝑥+𝑦−𝑧 =3
Where
      3  3  −2            5
𝐴 = ( 0  2   7 )  and 𝑏 = ( 0 ).
      1  1  −1            3
𝐴 is non-singular since det 𝐴 = −2 ≠ 0, and
                −9   1   25         9/2  −1/2  −25/2
𝐴−1 = −(1/2) (   7  −1  −21 ) = ( −7/2   1/2   21/2 )
                −2   0    6          1     0    −3
                9/2  −1/2  −25/2       5      −15
𝑋 = 𝐴−1𝑏 = ( −7/2   1/2   21/2 )  ×  ( 0 ) = (  14 )
                1     0    −3         3       −4
The system has a unique solution (𝑥, 𝑦, 𝑧) = (−15, 14, −4).
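The solution can be checked with NumPy (an illustrative sketch, not part of the notes):

```python
# NumPy check of the solution of the system (S).
import numpy as np

A = np.array([[3, 3, -2],
              [0, 2, 7],
              [1, 1, -1]], dtype=float)
b = np.array([5, 0, 3], dtype=float)

print(np.linalg.det(A))        # approx. -2.0, so A is non-singular
print(np.linalg.solve(A, b))   # [-15. 14. -4.]
```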
Theorem
Any row operation applied to the (augmented) matrix of the system will not change the corresponding
solution of the system.
Gaussian Elimination steps
𝟏𝒔𝒕 𝒔𝒕𝒆𝒑 Starting from the left, find the first non-zero column. The first pivot is obtained by taking the
biggest (in absolute value) element in this column; it is called the Gauss pivot (suppose it is 𝑎11, for
example).
𝟐𝒏𝒅 𝒔𝒕𝒆𝒑 Use row operations on 𝑅1, 𝑅2 … 𝑅𝑚 to make the entries below the pivot position equal to zero,
by 𝑅2 − (𝑎21/𝑎11)𝑅1, … , 𝑅𝑚 − (𝑎𝑚1/𝑎11)𝑅1.
𝟑𝒓𝒅 𝒔𝒕𝒆𝒑 Ignoring the row containing the pivot position, repeat step 1 and step 2 with the remaining
rows.
Example
For the system (𝑆) above, the augmented matrix is
  3  3  −2 |  5
( 0  2   7 |  0 )
  1  1  −1 |  3
𝑅3 → 𝑅3 − (1/3)𝑅1 gives
  3  3   −2  |  5
( 0  2    7  |  0 )
  0  0  −1/3 | 4/3
and 𝑅3 → −3𝑅3 gives
  3  3  −2 |  5
( 0  2   7 |  0 )
  0  0   1 | −4
The system (𝑆) becomes
  3𝑥 + 3𝑦 − 2𝑧 = 5
{      2𝑦 + 7𝑧 = 0
             𝑧 = −4
Back-substitution then gives 𝑧 = −4, 𝑦 = 14 and 𝑥 = −15, in agreement with the previous result.
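A minimal Python implementation of these three steps with partial pivoting (an illustrative sketch, not part of the notes; the function name gauss_echelon is a choice of this example; it returns the row-echelon form of the augmented matrix [𝐴|𝑏]):

```python
# Gaussian elimination sketch with partial pivoting on the augmented matrix [A|b].
import numpy as np

def gauss_echelon(A, b):
    M = np.column_stack([np.asarray(A, float), np.asarray(b, float)])
    m = M.shape[0]
    row = 0
    for col in range(M.shape[1] - 1):
        # step 1: pick the Gauss pivot (largest absolute value in the column)
        p = row + np.argmax(np.abs(M[row:, col]))
        if np.isclose(M[p, col], 0):
            continue
        M[[row, p]] = M[[p, row]]
        # step 2: eliminate the entries below the pivot
        for i in range(row + 1, m):
            M[i] -= (M[i, col] / M[row, col]) * M[row]
        row += 1                        # step 3: repeat on the remaining rows
        if row == m:
            break
    return M

A = [[3, 3, -2], [0, 2, 7], [1, 1, -1]]
b = [5, 0, 3]
print(gauss_echelon(A, b))   # last row is [0, 0, -1/3, 4/3], i.e. z = -4 after rescaling
```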