Module 4_Linear Transformations and Orthogonality

This document covers the concepts of linear transformations and orthogonality within the context of mathematics, particularly for engineering applications. It introduces linear transformations, their properties, and various standard transformations such as reflection and rotation, while also discussing orthogonal vectors and the Gram-Schmidt orthogonalization process. The aim is to equip students with the knowledge to apply these concepts to solve engineering-related problems.


Probability and Vector Spaces

Department of Mathematics
Jain Global campus, Jakkasandra Post, Kanakapura Taluk, Ramanagara District -562112

MODULE 4:
Linear Transformations and
Orthogonality
Department of Mathematics
FET-JAIN (Deemed-to-be University)
Table of Contents

• Aim
• Introduction
• Objective
• Introduction of linear transformation
• Linear transformations, some standard transformations (Reflection, Projection, Rotation and Magnification or
Contraction)
• Orthogonal vectors, orthogonal and orthonormal bases, orthogonality of vectors using Projections
• Gram-Schmidt Orthogonalization
• Solving probability-related problems
• Reference Links*
• Thank You
Aim

To familiarise students with some basic transformations and the fundamentals of linear transformation, and to educate aspiring engineers on orthogonal vector methods, orthogonal and orthonormal bases, the orthogonality of vectors using projections, and Gram-Schmidt orthogonalization.

Objective
a. Discuss the basic concepts of linear transformation and its applications in engineering.
b. Employ orthogonality concepts to tackle engineering problems.
c. Apply Gram-Schmidt orthogonalization to find an orthonormal basis.
Introduction
 Linear transformation: in mathematics, a rule for changing one geometric figure into another using a formula with a specified format.

 The format must be a linear combination, in which the original components (for example, the x and y coordinates of each point of the original figure) are changed via a formula such as ax + by to produce the coordinates of the transformed figure.

 Examples include flipping the figure over the x or y axis, stretching or compressing it, and rotating it. Some such transformations have an inverse, which undoes their effect.

 Linear transformations are useful because they preserve the structure of a vector space. So, many qualitative assessments of a vector space that is the domain of a linear transformation may, under certain conditions, automatically hold in the image of the linear transformation.

 Most graphical operations are represented by linear transformations.


Introduction
 A linear transformation, also known as a linear map, is a mapping between two vector spaces that preserves the operations of addition and scalar multiplication.

 Linear transformations are often used in machine learning applications. They are useful in the modelling of 2D and 3D animation, where an object's size and shape need to be transformed from one viewing angle to the next.

 An object can be rotated and scaled within a space using a type of linear transformation known as a geometric transformation, applied through transformation matrices.

 Linear transformations can be used to map graphical objects (a square, a triangle, or the letter L); the standard matrix of the given mapping determines the image of the transformed object.


Linear Transformation:

Let U and V be two vector spaces over the field F. The mapping T: U → V is called a linear transformation if

i. T(α + β) = T(α) + T(β) ∀ α, β ∈ U

ii. T(cα) = cT(α) ∀ c ∈ F, α ∈ U

U is called the domain and V the codomain of T.

E.g. Every matrix transformation is a linear transformation.
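As a quick numerical sketch (not in the original slides), the two defining conditions can be checked for a matrix transformation using Python/NumPy; the matrix A and the test vectors below are arbitrary illustrative choices.

```python
import numpy as np

# A minimal sketch: numerically check the two linearity conditions
# T(u + v) = T(u) + T(v) and T(c*u) = c*T(u) for a matrix transformation.
# The matrix A and the test vectors are arbitrary choices for illustration.
A = np.array([[1.0, 2.0], [0.0, -1.0]])
T = lambda x: A @ x

rng = np.random.default_rng(0)
u, v = rng.standard_normal(2), rng.standard_normal(2)
c = 3.0

print(np.allclose(T(u + v), T(u) + T(v)))  # True: additivity
print(np.allclose(T(c * u), c * T(u)))     # True: homogeneity
```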


THEOREM 1: A mapping T: U → V is a linear transformation if and only if T(c₁α + c₂β) = c₁T(α) + c₂T(β) ∀ c₁, c₂ ∈ F and α, β ∈ U.

Proof: Let T be linear. Then, by the definition,
T(c₁α + c₂β) = T(c₁α) + T(c₂β)
T(c₁α + c₂β) = c₁T(α) + c₂T(β)

Conversely, suppose T(c₁α + c₂β) = c₁T(α) + c₂T(β) --------→ (1)
Taking c₁ = c₂ = 1, equation (1) becomes T(α + β) = T(α) + T(β).
Taking c₁ = c and c₂ = 0, equation (1) becomes T(cα) = cT(α).
This proves T is linear.
Hence the proof.
THEOREM 2: If T: U → V is a linear mapping, then
i. T(0) = 0′, where 0 and 0′ are the zero vectors in U and V respectively.
ii. T(−α) = −T(α) ∀ α ∈ U.
iii. T(c₁α₁ + c₂α₂ + ⋯ + cₙαₙ) = c₁T(α₁) + c₂T(α₂) + ⋯ + cₙT(αₙ), where c₁, c₂, …, cₙ ∈ F and α₁, α₂, …, αₙ ∈ U (in engineering and physics this is referred to as the superposition principle).
Proof:
i. Consider T(α + 0) = T(α) + T(0). Since α + 0 = α, the left side is T(α) = T(α) + 0′, so
T(α) + 0′ = T(α) + T(0)
0′ = T(0)   [by the left cancellation law]
ii. Consider T(α + (−α)) = T(α) + T(−α)
⇒ T(0) = T(α) + T(−α)
⇒ 0′ = T(α) + T(−α)
⇒ T(−α) is the additive inverse of T(α),
that is, T(−α) = −T(α).
iii. To prove result (iii) we use mathematical induction
[Step 1: basis step and Step 2: induction step]
Given statement:
P(n): T(c₁α₁ + c₂α₂ + ⋯ + cₙαₙ) = c₁T(α₁) + c₂T(α₂) + ⋯ + cₙT(αₙ)
Basis step: check the result for n = 1, i.e. T(c₁α₁) = c₁T(α₁).
The result is true for n = 1.
Induction step:
Assume the result is true for n = k:
T(c₁α₁ + c₂α₂ + ⋯ + cₖαₖ) = c₁T(α₁) + c₂T(α₂) + ⋯ + cₖT(αₖ)
Now we prove the result for n = k + 1:
T((c₁α₁ + c₂α₂ + ⋯ + cₖαₖ) + cₖ₊₁αₖ₊₁) = T(c₁α₁ + c₂α₂ + ⋯ + cₖαₖ) + T(cₖ₊₁αₖ₊₁)
= c₁T(α₁) + c₂T(α₂) + ⋯ + cₖT(αₖ) + cₖ₊₁T(αₖ₊₁)
Thus the result is true for n = k + 1.
Hence the result is true for all n.
Problems:
1. Define T: V₃(R) → V₃(R) by T(x₁, x₂, x₃) = (0, x₂, x₃). Show that T is a linear transformation.
Solution: Let α = (x₁, x₂, x₃) and β = (y₁, y₂, y₃) ∈ V₃(R).
T(α + β) = T((x₁, x₂, x₃) + (y₁, y₂, y₃))
= T(x₁ + y₁, x₂ + y₂, x₃ + y₃)
= (0, x₂ + y₂, x₃ + y₃)
= (0, x₂, x₃) + (0, y₂, y₃)
= T(x₁, x₂, x₃) + T(y₁, y₂, y₃)
T(α + β) = T(α) + T(β)
Now consider
T(cα) = T(c(x₁, x₂, x₃))
= T(cx₁, cx₂, cx₃)
= (0, cx₂, cx₃)
= c(0, x₂, x₃)
= cT(x₁, x₂, x₃)
T(cα) = cT(α)
Hence T is a linear transformation.
2. If T: V₁(R) → V₃(R) is defined by T(x) = (x, x², x³), verify whether T is linear or not.
Solution: Let x, y ∈ V₁(R).
T(x + y) = (x + y, (x + y)², (x + y)³) ------ (1)
T(x) + T(y) = (x, x², x³) + (y, y², y³)
T(x) + T(y) = (x + y, x² + y², x³ + y³) ------ (2)
Comparing (1) and (2), T(x + y) ≠ T(x) + T(y).
∴ T is not linear {because (x + y)² ≠ x² + y² and (x + y)³ ≠ x³ + y³}.
Note:
1. (x, y) = c₁(1, 0) + c₂(0, 1)
2. (x, y) = x(1, 0) + y(0, 1)
3. Find the linear transformation f: R² → R² such that f(1, 0) = (1, 1) and f(0, 1) = (−1, 2).
Solution: Consider (x, y) = x(1, 0) + y(0, 1)
f(x, y) = f(x(1, 0) + y(0, 1))
= x f(1, 0) + y f(0, 1)
= x(1, 1) + y(−1, 2)
= (x, x) + (−y, 2y)
f(x, y) = (x − y, x + 2y)
(See https://www.geogebra.org/m/TSQk4yCY)
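A short sketch of the same idea in code: the given images f(1, 0) and f(0, 1) become the columns of the matrix of f, so f(x, y) can be evaluated directly (NumPy assumed; the names below are illustrative).

```python
import numpy as np

# Columns of F are the given images f(1,0) = (1,1) and f(0,1) = (-1,2).
F = np.column_stack([(1, 1), (-1, 2)]).astype(float)

def f(x, y):
    # f(x, y) = x*f(1,0) + y*f(0,1) = F @ (x, y)
    return F @ np.array([x, y], dtype=float)

print(f(1, 0))   # [ 1.  1.]
print(f(0, 1))   # [-1.  2.]
print(f(2, 3))   # [-1.  8.]  i.e. (x - y, x + 2y) evaluated at (2, 3)
```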
4. Find the linear transformation f: R² → R² such that f(1, 1) = (0, 1) and f(−1, 1) = (3, 2).
Solution: Consider (x, y) = c₁(1, 1) + c₂(−1, 1) ----- (1)
(x, y) = (c₁ − c₂, c₁ + c₂)
Now c₁ − c₂ = x and c₁ + c₂ = y.
Adding, c₁ = (x + y)/2 and c₂ = (y − x)/2.
Equation (1) becomes
(x, y) = ((x + y)/2)(1, 1) + ((y − x)/2)(−1, 1)
Applying f on both sides we get
f(x, y) = f[((x + y)/2)(1, 1) + ((y − x)/2)(−1, 1)]
= ((x + y)/2) f(1, 1) + ((y − x)/2) f(−1, 1)
= ((x + y)/2)(0, 1) + ((y − x)/2)(3, 2)
= (0, (x + y)/2) + (3(y − x)/2, (y − x))
= (3(y − x)/2, (x + y)/2 + (y − x))
f(x, y) = (3(y − x)/2, (3y − x)/2)
5. If T: R² → R² is a linear transformation such that T(1, 0) = (1, 1) and T(0, 1) = (−1, 2), show that T maps the square with vertices (0, 0), (1, 0), (1, 1) and (0, 1) into a parallelogram.
Solution: Consider (x, y) = x(1, 0) + y(0, 1)
[or: (x, y) = c₁(1, 0) + c₂(0, 1) = (c₁, c₂), so c₁ = x and c₂ = y, and hence (x, y) = x(1, 0) + y(0, 1)]
T(x, y) = T(x(1, 0) + y(0, 1))
= x T(1, 0) + y T(0, 1)
= x(1, 1) + y(−1, 2)
= (x, x) + (−y, 2y)
T(x, y) = (x − y, x + 2y)
Now T(0, 0) = (0, 0) = P
T(1, 0) = (1, 1) = Q
T(1, 1) = (0, 3) = R
T(0, 1) = (−1, 2) = S

[Figure: the square with vertices (0,0), (1,0), (1,1), (0,1) is mapped through the linear transformation T to the quadrilateral P(0,0), Q(1,1), R(0,3), S(−1,2).]

To prove the image is a parallelogram, recall that the diagonals of a parallelogram bisect each other.
Midpoint of PR = ((x₁ + x₃)/2, (y₁ + y₃)/2) = (0, 3/2) -------- (1)
Midpoint of QS = ((x₂ + x₄)/2, (y₂ + y₄)/2) = (0, 3/2) -------- (2)
From equations (1) and (2) the diagonals bisect each other, so the image is a parallelogram under the linear transformation T.
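A small sketch of the same check in code: apply T(x, y) = (x − y, x + 2y) to the four vertices and compare the midpoints of the two diagonals (NumPy assumed).

```python
import numpy as np

# Matrix of T with columns T(1,0) = (1,1) and T(0,1) = (-1,2).
T = np.array([[1, -1], [1, 2]], dtype=float)
square = np.array([(0, 0), (1, 0), (1, 1), (0, 1)], dtype=float)

P, Q, R, S = (T @ v for v in square)          # images of the four vertices
print(P, Q, R, S)                             # (0,0) (1,1) (0,3) (-1,2)
print((P + R) / 2, (Q + S) / 2)               # both midpoints are (0, 1.5)
print(np.allclose((P + R) / 2, (Q + S) / 2))  # True -> diagonals bisect each other
```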
6. If T: V₃(R) → V₂(R) is defined by T(x, y, z) = (x + y, y + z), show that T is a linear transformation.
Solution: Let α = (x₁, y₁, z₁) and β = (x₂, y₂, z₂).
T(α + β) = T((x₁, y₁, z₁) + (x₂, y₂, z₂))
= T(x₁ + x₂, y₁ + y₂, z₁ + z₂)
= (x₁ + x₂ + y₁ + y₂, y₁ + y₂ + z₁ + z₂)
= (x₁ + y₁, y₁ + z₁) + (x₂ + y₂, y₂ + z₂)
= T(x₁, y₁, z₁) + T(x₂, y₂, z₂)
T(α + β) = T(α) + T(β)
Now consider
T(cα) = T(c(x₁, y₁, z₁))
= T(cx₁, cy₁, cz₁)
= (cx₁ + cy₁, cy₁ + cz₁)
= c(x₁ + y₁, y₁ + z₁)
= cT(x₁, y₁, z₁)
T(cα) = cT(α)
Hence T is a linear transformation.
7. Examine whether the transformation T: R² → R³ defined by T(x, y) = (x − y, y, x + y) is a linear transformation or not.
Solution: Given that T: R² → R³ and T(x, y) = (x − y, y, x + y).
Let α = (x₁, y₁) and β = (x₂, y₂).
T(α + β) = T((x₁, y₁) + (x₂, y₂))
= T(x₁ + x₂, y₁ + y₂)
= (x₁ + x₂ − y₁ − y₂, y₁ + y₂, x₁ + x₂ + y₁ + y₂)
= (x₁ − y₁, y₁, x₁ + y₁) + (x₂ − y₂, y₂, x₂ + y₂)
= T(x₁, y₁) + T(x₂, y₂)
T(α + β) = T(α) + T(β)
Now consider
T(cα) = T(c(x₁, y₁))
= T(cx₁, cy₁)
= (cx₁ − cy₁, cy₁, cx₁ + cy₁)
= c(x₁ − y₁, y₁, x₁ + y₁)
= cT(x₁, y₁)
T(cα) = cT(α)
Hence T is a linear transformation.
Matrix Transformation:
It is a function T: Rⁿ → Rᵐ defined by T(x) = Ax, where A is an (m × n) matrix and x is an n-vector; the matrix product Ax is then an m-vector.
The vector T(x) in Rᵐ is called the image of x, and the set of all images of the vectors in Rⁿ is called the range of T.
Example 1: Let T be the matrix transformation defined by T(x) = [2 4; 3 1] x (the matrix is written row-wise, with rows separated by semicolons).
The image of x = (2, −1) is
T(2, −1) = [2 4; 3 1](2, −1) = (2·2 + 4·(−1), 3·2 + 1·(−1)) = (4 − 4, 6 − 1) = (0, 5).
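The same example as a short sketch in code (NumPy assumed):

```python
import numpy as np

# Matrix transformation T(x) = A x from the example above.
A = np.array([[2, 4], [3, 1]])
x = np.array([2, -1])

print(A @ x)  # [0 5] -> the image of x is (0, 5)
```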
1. Find the matrix of the linear transformation T: V₃(R) → V₂(R) defined by T(x, y, z) = (x + y, y + z).

Solution: Given that T: V₃(R) → V₂(R) and T(x, y, z) = (x + y, y + z).

Let β₁ = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, where e₁, e₂ and e₃ are the standard basis vectors in V₃(R).

T(e₁) = T(1, 0, 0) = (1, 0)

T(e₂) = T(0, 1, 0) = (1, 1)

T(e₃) = T(0, 0, 1) = (0, 1)

The matrix of the linear transformation (with these images as columns) is [1 1 0; 0 1 1].
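A minimal sketch of this construction in code: apply T to the standard basis vectors and stack the images as columns (the helper T below is illustrative).

```python
import numpy as np

def T(x, y, z):
    # T(x, y, z) = (x + y, y + z)
    return np.array([x + y, y + z], dtype=float)

# Columns of the standard matrix are T(e1), T(e2), T(e3).
M = np.column_stack([T(1, 0, 0), T(0, 1, 0), T(0, 0, 1)])
print(M)  # [[1. 1. 0.]
          #  [0. 1. 1.]]
```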
2. Find the matrix for the linear transformation T: V₂ → V₃ such that T(x₁, x₂) = (x₁ + x₂, 2x₁ − x₂, 7x₂).
Solution: Given that T: V₂ → V₃ and T(x₁, x₂) = (x₁ + x₂, 2x₁ − x₂, 7x₂).
Let β₁ = {(1, 0), (0, 1)}, where e₁ and e₂ are the standard basis vectors in V₂.
T(e₁) = T(1, 0) = (1, 2, 0)
T(e₂) = T(0, 1) = (1, −1, 7)
The matrix of the linear transformation is [1 1; 2 −1; 0 7].
3. Find the matrix of the linear transformation T: V₃ → V₂ with respect to the standard basis, defined by T(x, y, z) = (z − 2y, x + 2y − z). [Homework]

4. Find the matrix of the linear transformation T: V₂(R) → V₂(R) defined by T(x, y) = (x, −y). [Homework]

Standard Transformations:
1. Dilation/Magnification and Contraction:

A dilation is a transformation that produces an image that is the same shape as the original but a different size. In simple words, a dilation just resizes the given figure without rotating it or changing it in any other way.
Consider the transformation T(x, y) = r(x, y), where r is a positive scalar. T maps every point in R² into a point r times as far from the origin.
If r > 1, T moves points away from the origin and is called a dilation of factor r. If 0 < r < 1, T moves points closer to the origin and is called a contraction of factor r.
In matrix form, T(x, y) = [r 0; 0 r](x, y).
Ex:
When r = 3, T(x, y) = [3 0; 0 3](x, y) = 3(x, y): the image is three times as far from the origin.
When r = 1/2, T(x, y) = [1/2 0; 0 1/2](x, y) = (1/2)(x, y): the distance from the origin is halved (a contraction).
2. Reflection:
It is a transformation which produces a mirror image of an object. The mirror image can be about either the x-axis or the y-axis.
Consider the transformation T(x, y) = (x, −y). T maps every point in R² into its mirror image in the x-axis and is called a reflection.
In matrix form, T(x, y) = [1 0; 0 −1](x, y).
Similarly, the transformation T(x, y) = (−x, y) maps every point in R² into its mirror image in the y-axis. In matrix form, T(x, y) = [−1 0; 0 1](x, y).

Ex.
T(3, 2) = [1 0; 0 −1](3, 2) = (3, −2) is the image about the x-axis (in the 4th quadrant), and
T(3, 2) = [−1 0; 0 1](3, 2) = (−3, 2) is the image about the y-axis (in the 2nd quadrant).
3. Rotation:

Consider the rotation transformation T about the origin through an angle θ. T maps a point A = (x, y) into the point B = (x′, y′). Writing x = r cos φ and y = r sin φ, from the figure,
x′ = OC = r cos(θ + φ)
= r cos θ cos φ − r sin θ sin φ
= x cos θ − y sin θ
y′ = BC = r sin(θ + φ)
= r sin θ cos φ + r cos θ sin φ
= x sin θ + y cos θ
In matrix form,
(x′, y′) = [cos θ −sin θ; sin θ cos θ](x, y)
Thus, a rotation through an angle θ is described by
T(x, y) = [cos θ −sin θ; sin θ cos θ](x, y)

Note:
a. If θ is positive, rotation takes place in the counter-clockwise direction.
b. If θ is negative, rotation takes place in the clockwise direction.
Ex:

Consider the point (5, 3) rotated 90° in the counter-clockwise direction; then
T(5, 3) = [cos 90° −sin 90°; sin 90° cos 90°](5, 3) = (−3, 5)

Consider the point (5, 3) rotated through 180° or −180° (clockwise or counter-clockwise); then
T(5, 3) = [cos 180° −sin 180°; sin 180° cos 180°](5, 3) = (−5, −3)
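A short sketch checking both rotation examples numerically (NumPy works in radians):

```python
import numpy as np

def rotation(theta):
    # 2D rotation matrix [[cos t, -sin t], [sin t, cos t]].
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

p = np.array([5, 3])
print(np.round(rotation(np.pi / 2) @ p))  # [-3.  5.]  (90 degrees counter-clockwise)
print(np.round(rotation(np.pi) @ p))      # [-5. -3.]  (180 degrees)
```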
4. Projection:
Consider the projection of vectors onto a line L through the origin making an angle θ with the x-axis. Let A′ and B′ be the points of projection of A and B respectively. Then this family of projections can be represented in matrix form as
P = [cos²θ  cos θ sin θ; cos θ sin θ  sin²θ]
1. Define a linear transformation T: R² → R² by T(x) = [0 −1; 1 0](x₁, x₂) = (−x₂, x₁).
Find the images under T of u = (4, 1), v = (2, 3) and u + v = (6, 4).
Solution: T(u) = [0 −1; 1 0](4, 1) = (−1, 4) and T(v) = [0 −1; 1 0](2, 3) = (−3, 2).
T(u + v) = [0 −1; 1 0](6, 4) = (−4, 6)

[Figure: Rotation Transformation]

It is clear that T(u + v) = T(u) + T(v). Here, T rotates u, v and u + v counter-clockwise about the origin through 90°. T transforms the entire parallelogram determined by u and v into the one determined by T(u) and T(v).
Ex. 2 Determine the new point after applying the transformation to the given point. Use the induced matrix associated with each transformation to find the new point.
(a) x = (5, 0), rotated 90° in the counter-clockwise direction.
(b) x = (0, 5, 1), rotated 90° in the counter-clockwise direction about the y-axis.
Solution:
a. We know that the induced matrix in R² is [cos θ −sin θ; sin θ cos θ].
The original point is x = (5, 0) and it is rotated 90° in the counter-clockwise direction, so the new point is
T(5, 0) = [cos 90° −sin 90°; sin 90° cos 90°](5, 0) = (0, 5)
The new point after this rotation is (0, 5).

b. We know that the induced matrix for a counter-clockwise rotation through an angle θ about the positive y-axis in R³ is
[cos θ 0 sin θ; 0 1 0; −sin θ 0 cos θ]
The original point is x = (0, 5, 1) and it is rotated 90° counter-clockwise about the positive y-axis, so the new point is
T(0, 5, 1) = [cos 90° 0 sin 90°; 0 1 0; −sin 90° 0 cos 90°](0, 5, 1) = (1, 5, 0)
The new point after this rotation is (1, 5, 0). Note that the original point was in the yz-plane (since the x component is zero), and a 90° counter-clockwise rotation about the positive y-axis puts the new point in the xy-plane, with the z component becoming zero.
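A sketch of part (b) in code, using the same induced matrix about the positive y-axis:

```python
import numpy as np

theta = np.pi / 2  # 90 degrees
c, s = np.cos(theta), np.sin(theta)

# Rotation about the positive y-axis in R^3.
Ry = np.array([[c, 0, s],
               [0, 1, 0],
               [-s, 0, c]])

x = np.array([0, 5, 1])
print(np.round(Ry @ x))  # [1. 5. 0.] -> the point moves from the yz-plane to the xy-plane
```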
Orthogonality
Inner Product: If u and v are n×1 matrices, then their inner product u·v = uᵀv is a 1×1 matrix, i.e. a real scalar.
If u = (u₁, u₂, u₃, …, uₙ) and v = (v₁, v₂, v₃, …, vₙ), then the inner product of u and v is
u·v = uᵀv = u₁v₁ + u₂v₂ + ⋯ + uₙvₙ
The product of the two vectors is a scalar.
Example: Let u = (1, 2, 3) ∈ R³ and v = (5, 7, 8) ∈ R³. Then
uᵀv = (1)(5) + (2)(7) + (3)(8) = 5 + 14 + 24 = 43
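The same inner product computed as a sketch in code (NumPy assumed):

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([5, 7, 8])

# Inner product u.v = u^T v = u1*v1 + u2*v2 + u3*v3
print(np.dot(u, v))  # 43
print(u @ v)         # 43, equivalent notation
```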

Properties of inner product:

Let u, v, w be vectors in Rⁿ, and c be a scalar. Then
 u·v = v·u
 (u + v)·w = u·w + v·w
 (cu)·v = c(u·v) = u·(cv)
 u·u ≥ 0, and u·u = 0 iff u = 0
Note:
R² = R × R → 2-dimensional space
R³ = R × R × R → 3-dimensional space
R⁴ = R × R × R × R → 4-dimensional space
Rⁿ = R × R × R × ⋯ × R → n-dimensional space

 Norm (denoted by ‖ ‖): the length of a vector.
Let u = (u₁, u₂, u₃, …, uₙ) be any n-dimensional vector. Then ‖u‖ = √(u·u) = the distance from the origin (0, 0, 0, …, 0).
The length of a vector v is the non-negative scalar defined by ‖v‖ = √(v·v)
(OR)
the distance from the origin, ‖v‖ = √(v₁² + v₂² + ⋯ + vₙ²)
Note: If c is any scalar and u is any vector, then ‖c·u‖ = |c|·‖u‖.

Unit Vector:
A vector whose length is one. If v is a non-zero vector, it can be converted into a unit vector by using the formula
Unit vector n̂ = v/‖v‖

Distance between two vectors:
For u and v in Rⁿ, the distance between u and v is the length of the vector u − v, i.e.
the distance between u and v is given by ‖u − v‖.
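A brief sketch of these three quantities in code; the vectors are chosen to match the worked problems below:

```python
import numpy as np

v = np.array([1.0, -2.0, 2.0, 0.0])
print(np.linalg.norm(v))        # 3.0, the length of v
print(v / np.linalg.norm(v))    # [ 0.333... -0.666...  0.666...  0. ], the unit vector

u = np.array([7.0, 1.0])
w = np.array([3.0, 2.0])
print(np.linalg.norm(u - w))    # 4.123... = sqrt(17), the distance between u and w
```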

Orthogonal vectors:
Two vectors u, v ∈ Rⁿ are orthogonal if u·v = 0.

Orthogonal Sets:
A set of vectors {u₁, u₂, u₃, …, uₖ} in Rⁿ is an orthogonal set if each pair of distinct vectors from the set is
orthogonal,
that is, uᵢ·uⱼ = 0 whenever i ≠ j.

Orthogonal Basis:

An orthogonal basis for an inner product space V is a basis for V whose vectors are mutually orthogonal.

Orthonormal Basis:
If the vectors of an orthogonal basis are normalized (i.e. vectors of unit length), the resulting basis is called an
orthonormal basis (or) It is a basis whose vectors have unit norm and are orthogonal to each other
Weights in the linear combination:
If {u₁, u₂, …, uₚ} is an orthogonal basis for a subspace W of Rⁿ, then for each y in W, the weights in the linear
combination y = c₁u₁ + ⋯ + cₚuₚ are given by
c_j = (y·u_j)/(u_j·u_j),   j = 1, 2, …, p
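A sketch of this weight formula in code, using the orthogonal set and the vector y from Problem 4 below:

```python
import numpy as np

# Orthogonal basis of R^3 (from Problem 4 below) and a target vector y.
U = [np.array([3.0, 1.0, 1.0]),
     np.array([-1.0, 2.0, 1.0]),
     np.array([-0.5, -2.0, 3.5])]
y = np.array([6.0, 1.0, -8.0])

# c_j = (y . u_j) / (u_j . u_j)
c = np.array([np.dot(y, u) / np.dot(u, u) for u in U])
print(c)                                                    # [ 1. -2. -2.]
print(np.allclose(sum(cj * u for cj, u in zip(c, U)), y))   # True: y is recovered
```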

Orthogonal Complements:
If a vector z is orthogonal to every vector in a subspace W in 𝑅𝑛 , then z is said to be orthogonal to W. The set of
all vectors that are orthogonal to W is called the orthogonal complement of W denoted by 𝑊 ⊥ .

Note: A vector x is in 𝑊 ⊥ if x is orthogonal to every vector in a set that spans 𝑊.


𝑊 ⊥ is a subspace of 𝑅 𝑛 .
Problems
1. Compute the length of the vector v = (1, −2, 2, 0); also find the unit vector in its direction and the length of that unit vector.
Solution: Given that v = (1, −2, 2, 0).
Length of the vector = ‖v‖ = √(v₁² + v₂² + v₃² + v₄²)
= √(1² + (−2)² + 2² + 0²)
= √(1 + 4 + 4 + 0) = √9 = 3
‖v‖ = 3

Unit vector n̂ = v/‖v‖ = (1, −2, 2, 0)/3
u = n̂ = (1/3, −2/3, 2/3, 0)
Length of the unit vector ‖u‖ = √((1/3)² + (−2/3)² + (2/3)² + 0²)
= √(1/9 + 4/9 + 4/9)
= √(9/9) = 1
‖u‖ = 1
2. Calculate the length of the vector v = (2, −1, 1, 2) and also find the unit vector in its direction.
Solution: Given that v = (2, −1, 1, 2).
Length of the vector = ‖v‖ = √(v₁² + v₂² + v₃² + v₄²)
= √(2² + (−1)² + 1² + 2²)
= √(4 + 1 + 1 + 4) = √10
‖v‖ = √10

Unit vector u = n̂ = v/‖v‖ = (2, −1, 1, 2)/√10 = (2/√10, −1/√10, 1/√10, 2/√10)
3. Find the distance between the vectors u = (7, 1) and v = (3, 2).
Solution: Given that u = (7, 1) and v = (3, 2).
Distance between u and v = ‖u − v‖ = √((u − v)·(u − v))
Let u − v = (7, 1) − (3, 2) = (4, −1)
‖u − v‖ = √(4² + (−1)²) = √(16 + 1)
‖u − v‖ = √17
4. Show that {u₁, u₂, u₃} is an orthogonal set, where u₁ = (3, 1, 1), u₂ = (−1, 2, 1) and u₃ = (−1/2, −2, 7/2). Express y = (6, 1, −8) as a linear combination of the vectors of S = {u₁, u₂, u₃}.
Solution: Given that u₁ = (3, 1, 1), u₂ = (−1, 2, 1), u₃ = (−1/2, −2, 7/2) and y = (6, 1, −8).
To prove that u₁, u₂, u₃ are orthogonal to each other, i.e. u₁·u₂ = 0, u₂·u₃ = 0 and u₃·u₁ = 0:
u₁·u₂ = (3)(−1) + (1)(2) + (1)(1) = −3 + 2 + 1 = 3 − 3 = 0
u₂·u₃ = (−1)(−1/2) + (2)(−2) + (1)(7/2) = 1/2 − 4 + 7/2 = 4 − 4 = 0
u₃·u₁ = (−1/2)(3) + (−2)(1) + (7/2)(1) = −3/2 − 2 + 7/2 = 2 − 2 = 0
So {u₁, u₂, u₃} is an orthogonal set.
We know that y = c₁u₁ + c₂u₂ + c₃u₃, where
c₁ = (y·u₁)/(u₁·u₁) = (18 + 1 − 8)/(3² + 1² + 1²) = 11/11 = 1
c₂ = (y·u₂)/(u₂·u₂) = (−6 + 2 − 8)/((−1)² + 2² + 1²) = −12/6 = −2
c₃ = (y·u₃)/(u₃·u₃) = (−3 − 2 − 28)/((−1/2)² + (−2)² + (7/2)²) = −33/(33/2) = −2
∴ y = c₁u₁ + c₂u₂ + c₃u₃
y = u₁ − 2u₂ − 2u₃ as a linear combination of the vectors of S.
Projections:
If W is a subspace of Rⁿ with orthonormal basis {w₁, w₂, …, wₘ} and v is any vector in Rⁿ, then there exist unique vectors w in W and u in W⊥ such that v = w + u.
Moreover, w = (v·w₁)w₁ + (v·w₂)w₂ + ⋯ + (v·wₘ)wₘ, which is called the orthogonal projection of v onto W and is denoted by
Proj_W v = (v·w₁)w₁ + (v·w₂)w₂ + ⋯ + (v·wₘ)wₘ

Note: If w₁, w₂, …, wₘ are orthogonal (not necessarily unit) vectors, then
Proj_W v = (v·w₁)/(w₁·w₁) w₁ + (v·w₂)/(w₂·w₂) w₂ + ⋯ + (v·wₘ)/(wₘ·wₘ) wₘ
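A sketch of these formulas in code, using the orthonormal basis w₁, w₂ and the vector v from the projection problem worked below:

```python
import numpy as np

# Orthonormal basis of the subspace W (from the worked problem below).
w1 = np.array([2/3, -1/3, -2/3])
w2 = np.array([1/np.sqrt(2), 0.0, 1/np.sqrt(2)])
v = np.array([2.0, 1.0, 3.0])

# Orthogonal projection of v onto W: w = (v.w1) w1 + (v.w2) w2
w = np.dot(v, w1) * w1 + np.dot(v, w2) * w2
u = v - w                                   # the component of v orthogonal to W
print(w)                                    # [1.833... 0.333... 3.166...] = (11/6, 2/6, 19/6)
print(u)                                    # [0.166... 0.666... -0.166...] = (1/6, 4/6, -1/6)
print(np.isclose(np.dot(u, w1), 0), np.isclose(np.dot(u, w2), 0))  # True True
```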
Orthonormal Sets:

A set {𝑢1 , 𝑢2 , … , 𝑢𝑝 } is an orthonormal set if it is an orthogonal set of unit vectors.


If W is a subspace spanned by such a set, then {𝑢1 , 𝑢2 , … , 𝑢𝑝 } is called an orthonormal basis for W.
Eg. The standard basis {𝑒1 , 𝑒2 , … , 𝑒𝑛 } of 𝑅𝑛 .
1. Let W be the two-dimensional subspace of R³ with orthonormal basis {w₁, w₂}, where w₁ = (2/3, −1/3, −2/3) and w₂ = (1/√2, 0, 1/√2). Using the standard inner product on R³, find the orthogonal projection w of v = (2, 1, 3) on W and the vector u that is orthogonal to every vector in W.
Solution: Given that w₁ = (2/3, −1/3, −2/3), w₂ = (1/√2, 0, 1/√2) and v = (2, 1, 3).

We have w = (v·w₁)w₁ + (v·w₂)w₂
v·w₁ = 4/3 − 1/3 − 2 = (4 − 1 − 6)/3 = −1
v·w₂ = 2/√2 + 0 + 3/√2 = 5/√2
w = −w₁ + (5/√2)w₂
= (−2/3, 1/3, 2/3) + (5/2, 0, 5/2)
= (−2/3 + 5/2, 1/3 + 0, 2/3 + 5/2)
= ((−4 + 15)/6, 1/3, (4 + 15)/6)
w = (11/6, 2/6, 19/6)

We know that if W is a subspace of Rⁿ with orthonormal basis {w₁, w₂, …, wₘ} and v is any vector
in Rⁿ, then there exist unique vectors w in W and u in W⊥ such that v = w + u.

Now u = v − w
= (2, 1, 3) − (11/6, 2/6, 19/6)
= (2 − 11/6, 1 − 2/6, 3 − 19/6)
= ((12 − 11)/6, (6 − 2)/6, (18 − 19)/6)
u = (1/6, 4/6, −1/6)
Ex. 2 Show that {v₁, v₂, v₃} is an orthonormal set in R³, where
v₁ = (3/√11, 1/√11, 1/√11), v₂ = (−1/√6, 2/√6, 1/√6), v₃ = (−1/√66, −4/√66, 7/√66).

Solution: Compute
v₁·v₂ = −3/√66 + 2/√66 + 1/√66 = 0
v₁·v₃ = −3/√726 − 4/√726 + 7/√726 = 0
v₂·v₃ = 1/√396 − 8/√396 + 7/√396 = 0
Thus {v₁, v₂, v₃} is an orthogonal set.

Now,
v₁·v₁ = 9/11 + 1/11 + 1/11 = 1
v₂·v₂ = 1/6 + 4/6 + 1/6 = 1
v₃·v₃ = 1/66 + 16/66 + 49/66 = 1
i.e. v₁, v₂, v₃ are unit vectors. Thus {v₁, v₂, v₃} is an orthonormal set.
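The same check can be condensed with the Gram matrix VᵀV, which equals the identity exactly when the columns form an orthonormal set (a sketch, not part of the original slides):

```python
import numpy as np

# Columns are v1, v2, v3 from the example above.
V = np.column_stack([
    np.array([3, 1, 1]) / np.sqrt(11),
    np.array([-1, 2, 1]) / np.sqrt(6),
    np.array([-1, -4, 7]) / np.sqrt(66),
])

# V^T V collects all pairwise dot products; it is the identity iff the set is orthonormal.
print(np.allclose(V.T @ V, np.eye(3)))  # True
```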
The Gram–Schmidt Orthonormalization Process
It is a process for computing an orthonormal basis T = {w₁, w₂, …, wₘ} for a non-zero subspace W of Rⁿ with basis
S = {u₁, u₂, …, uₘ}.
Working Rule:
Step 1: v₁ = u₁
Step 2: Compute the vectors v₂, v₃, …, vₘ successively, one at a time, by the formula
vᵢ = uᵢ − (uᵢ·v₁)/(v₁·v₁) v₁ − (uᵢ·v₂)/(v₂·v₂) v₂ − ⋯ − (uᵢ·vᵢ₋₁)/(vᵢ₋₁·vᵢ₋₁) vᵢ₋₁

The set of vectors T* = {v₁, v₂, …, vₘ} is an orthogonal set.

Step 3: Let wᵢ = vᵢ/‖vᵢ‖ (1 ≤ i ≤ m). Then T = {w₁, w₂, …, wₘ} is an orthonormal basis for W.

Remark: At any stage a vᵢ may be replaced by a non-zero scalar multiple c·vᵢ, c ∈ R, since u·(cv) = 0 whenever u·v = 0; this is sometimes convenient in the Gram–Schmidt process to avoid fractions.
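A compact sketch of the working rule in code; gram_schmidt is an illustrative name, and the input basis is taken from Problem 1 below:

```python
import numpy as np

def gram_schmidt(basis):
    """Return an orthonormal basis for span(basis) following the working rule above."""
    vs = []
    for u in basis:
        u = np.array(u, dtype=float)
        # Step 2: v_i = u_i - sum over earlier v_j of (u_i . v_j)/(v_j . v_j) v_j
        v = u - sum((np.dot(u, w) / np.dot(w, w)) * w for w in vs)
        vs.append(v)
    # Step 3: normalize each v_i to get the orthonormal basis.
    return [v / np.linalg.norm(v) for v in vs]

# Basis S = {u1, u2, u3} from Problem 1 below.
W = gram_schmidt([(1, 1, 1), (-1, 0, -1), (-1, 2, 3)])
for w in W:
    print(np.round(w, 4))
# [0.5774 0.5774 0.5774]  = (1/sqrt(3), 1/sqrt(3), 1/sqrt(3))
# [-0.4082  0.8165 -0.4082] = (-1/sqrt(6), 2/sqrt(6), -1/sqrt(6))
# [-0.7071  0.      0.7071] = (-1/sqrt(2), 0, 1/sqrt(2))
```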


Problems
1. Consider the basis S = {u₁, u₂, u₃} for R³, where u₁ = (1, 1, 1), u₂ = (−1, 0, −1) and u₃ = (−1, 2, 3). Use the Gram–Schmidt process to transform S into an orthonormal basis for R³.
Solution: Given that u₁ = (1, 1, 1), u₂ = (−1, 0, −1) and u₃ = (−1, 2, 3).
Let v₁ = u₁ = (1, 1, 1)
v₂ = u₂ − (u₂·v₁)/(v₁·v₁) v₁
= (−1, 0, −1) − [((−1, 0, −1)·(1, 1, 1))/((1, 1, 1)·(1, 1, 1))](1, 1, 1)
= (−1, 0, −1) − [(−1 + 0 − 1)/(1 + 1 + 1)](1, 1, 1)
= (−1, 0, −1) + (2/3)(1, 1, 1)
= (−1 + 2/3, 0 + 2/3, −1 + 2/3)
v₂ = (−1/3, 2/3, −1/3)
Take v₂ = (−1, 2, −1)   [multiplying by 3 throughout the vector]
v₃ = u₃ − (u₃·v₁)/(v₁·v₁) v₁ − (u₃·v₂)/(v₂·v₂) v₂
= (−1, 2, 3) − [(−1 + 2 + 3)/(1 + 1 + 1)](1, 1, 1) − [(1 + 4 − 3)/(1 + 4 + 1)](−1, 2, −1)
= (−1, 2, 3) − (4/3)(1, 1, 1) − (2/6)(−1, 2, −1)
= (−1 − 4/3 + 1/3, 2 − 4/3 − 2/3, 3 − 4/3 + 1/3)
v₃ = (−2, 0, 2)
Take v₃ = (−1, 0, 1)   [multiplying by 1/2 throughout the vector]
T* = {(1, 1, 1), (−1, 2, −1), (−1, 0, 1)} is an orthogonal basis for R³.
Now w₁ = v₁/‖v₁‖ = (1, 1, 1)/√(1² + 1² + 1²) = (1/√3, 1/√3, 1/√3)
w₂ = v₂/‖v₂‖ = (−1, 2, −1)/√((−1)² + 2² + (−1)²) = (−1/√6, 2/√6, −1/√6)
w₃ = v₃/‖v₃‖ = (−1, 0, 1)/√((−1)² + 0² + 1²) = (−1/√2, 0, 1/√2)
T = {w₁, w₂, w₃} = {(1/√3, 1/√3, 1/√3), (−1/√6, 2/√6, −1/√6), (−1/√2, 0, 1/√2)} is the orthonormal basis for R³.
2. Let W be the subspace of R⁴ with basis S = {u₁, u₂, u₃}, where u₁ = (1, −2, 0, 1), u₂ = (−1, 0, 0, −1) and u₃ = (1, 1, 0, 0). Use the Gram–Schmidt process to transform S into an orthonormal basis for W.
Solution: Given that u₁ = (1, −2, 0, 1), u₂ = (−1, 0, 0, −1) and u₃ = (1, 1, 0, 0).
Let v₁ = u₁ = (1, −2, 0, 1)
v₂ = u₂ − (u₂·v₁)/(v₁·v₁) v₁
= (−1, 0, 0, −1) − [(−1 + 0 + 0 − 1)/(1 + 4 + 0 + 1)](1, −2, 0, 1)
= (−1, 0, 0, −1) + (2/6)(1, −2, 0, 1)
= (−1 + 1/3, −2/3, 0, −1 + 1/3)
v₂ = (−2/3, −2/3, 0, −2/3)
Take v₂ = (−2, −2, 0, −2)   [multiplying by 3 throughout the vector]
v₃ = u₃ − (u₃·v₁)/(v₁·v₁) v₁ − (u₃·v₂)/(v₂·v₂) v₂
= (1, 1, 0, 0) − [(1 − 2 + 0 + 0)/(1 + 4 + 0 + 1)](1, −2, 0, 1) − [(−2 − 2 + 0 + 0)/(4 + 4 + 0 + 4)](−2, −2, 0, −2)
= (1, 1, 0, 0) + (1/6)(1, −2, 0, 1) + (1/3)(−2, −2, 0, −2)
= (1 + 1/6 − 2/3, 1 − 1/3 − 2/3, 0, 0 + 1/6 − 2/3)
v₃ = (1/2, 0, 0, −1/2)
Take v₃ = (1, 0, 0, −1)   [multiplying by 2 throughout the vector]
T* = {(1, −2, 0, 1), (−2, −2, 0, −2), (1, 0, 0, −1)} is an orthogonal basis for W.
Now w₁ = v₁/‖v₁‖ = (1, −2, 0, 1)/√(1² + (−2)² + 0² + 1²) = (1/√6, −2/√6, 0, 1/√6)
w₂ = v₂/‖v₂‖ = (−2, −2, 0, −2)/√((−2)² + (−2)² + 0² + (−2)²) = (−1/√3, −1/√3, 0, −1/√3)
w₃ = v₃/‖v₃‖ = (1, 0, 0, −1)/√(1² + 0² + 0² + (−1)²) = (1/√2, 0, 0, −1/√2)
T = {w₁, w₂, w₃} = {(1/√6, −2/√6, 0, 1/√6), (−1/√3, −1/√3, 0, −1/√3), (1/√2, 0, 0, −1/√2)} is the orthonormal basis for W.
3. Given that x₁ = (3, 0, −1) and x₂ = (8, 5, −6) form a basis for a subspace W, use the Gram–Schmidt process to produce an orthogonal basis for W.
Solution: Given that x₁ = (3, 0, −1) and x₂ = (8, 5, −6).
v₁ = x₁ = (3, 0, −1)
v₂ = x₂ − (x₂·v₁)/(v₁·v₁) v₁
= (8, 5, −6) − [(24 + 0 + 6)/(3² + 0² + (−1)²)](3, 0, −1)
= (8, 5, −6) − (30/10)(3, 0, −1)
= (8, 5, −6) − (9, 0, −3)
= (8 − 9, 5 − 0, −6 + 3)
v₂ = (−1, 5, −3)
T* = {(3, 0, −1), (−1, 5, −3)} is the orthogonal basis for W.

Normalizing, T = {(3/√10, 0, −1/√10), (−1/√35, 5/√35, −3/√35)} is the orthonormal basis for W.
Summary

Outcomes:

a. Discuss the fundamental concepts of linear transformation and its applications in engineering.

b. Describe the importance of orthogonality concepts in tackling engineering problems.

c. Apply Gram-Schmidt orthogonalization to solve problems.
Reference Links

1. Walpole, Myers, Myers, Ye, "Probability & Statistics for Engineers and Scientists", Pearson Education, 2013.
2. Gilbert Strang, "Linear Algebra and its Applications", Cengage Publishers, 4th Edition, 2014.
3. Kishor S. Trivedi, "Probability & Statistics with Reliability, Queuing and Computer Applications", Prentice Hall of India, 2nd Edition, 2008.
4. M. P. Deisenroth, A. A. Faisal, and C. S. Ong, "Mathematics for Machine Learning", Cambridge University Press, 2019. (Book: https://mml-book.com)
Thank you
