A COMPILATION OF PROBLEMS IN
Linear Algebra and Function Approximation
EDITED BY
DEPARTMENT OF MATHEMATICS, Faculty of Mathematics
SYLLABUS
B.Tech. I Year I Sem.  L T P C: 3 1 0 4
LINEAR ALGEBRA AND DIFFERENTIAL CALCULUS
COMMON TO CE, EEE, ME, ECE, CSE & IT
Solution of a linear algebraic system of equations (homogeneous and non-homogeneous) using Gauss elimination.
Least squares solution of an overdetermined system of equations using QR factorization and the generalized inverse; estimation of the least squares error.
The principle of least squares; function approximation using polynomial, exponential and power curves in matrix notation; estimating the mean squared error.
TEXT BOOKS:
Advanced Engineering Mathematics, 5th edition, R.K. Jain and S.R.K. Iyengar, Narosa Publishing House.
Higher Engineering Mathematics, B.S. Grewal, Khanna Publications.
UNIT - 1
Fundamentals of Vector and Matrix Algebra
Introduction: Let V be a non-empty set of certain objects, which may be vectors, matrices, functions or
some other objects. Each object is an element of V and is called a vector. The elements of V are denoted
by a, b, c, u, v, etc.
Example: 𝒂 = (𝑎1 , 𝑎2 , … 𝑎𝑛 ) 𝒃 = (𝑏1 , 𝑏2 , … 𝑏𝑛 ) where 𝒂, 𝒃 ∈ 𝑽
Assume that the two algebraic operations
1. Vector addition
𝒂 + 𝒃 = (𝑎1 , 𝑎2 , … 𝑎𝑛 ) + (𝑏1 , 𝑏2 , … 𝑏𝑛 ) = (𝑎1 + 𝑏1 , 𝑎2 + 𝑏2 , … 𝑎𝑛 + 𝑏𝑛 )
2. Scalar multiplication
𝛼𝒂 = 𝛼 (𝑎1 , 𝑎2 , … 𝑎𝑛 ) = (𝛼𝑎1 , 𝛼𝑎2 , … 𝛼𝑎𝑛 ) for any scalar 𝛼 are defined on elements of V.
Vector Space: A set V defines a vector space if for any elements 𝒂, 𝒃, 𝒄 in V and any scalars, 𝛼, 𝛽 the
following properties are satisfied.
Properties with respect to vector addition:
i. 𝒂 + 𝒃 in V
ii. 𝒂+𝒃 =𝒃+𝒂 (Commutative law)
iii. (𝒂 + 𝒃) + 𝒄 = 𝒂 + (𝒃 + 𝒄) (Associative law)
iv. 𝒂+𝟎 =𝟎+𝒂 (Existence of a unique zero element in V)
v. 𝒂 + (−𝒂) = 𝟎 (Existence of additive inverse or negative vector in V)
Properties with respect to scalar multiplication:
vi. 𝛼𝒂 is in V
vii. (𝛼 + 𝛽 )𝒂 = 𝛼𝒂 + 𝛽𝒂 (Left distributive law)
viii. (𝛼𝛽 )𝒂 = 𝛼(𝛽𝒂)
ix. 𝛼(𝒂 + 𝒃) = 𝛼𝒂 + 𝛼𝒃 (Right distributive law)
x. 1 · 𝒂 = 𝒂 (Existence of multiplicative identity)
If the elements of V and the scalars 𝛼, 𝛽 are real numbers, then V is called a real vector space. V is called a complex vector space if the elements of V are complex and the scalars 𝛼, 𝛽 are real or complex numbers, or if the elements of V are real and the scalars 𝛼, 𝛽 are complex numbers.
Examples of a vector space:
1. The set V of real or complex numbers.
2. The set of real valued continuous functions f on any closed interval [𝑎, 𝑏]. The 0 vector defined in
property (iv) is the zero function.
3. The set of polynomials 𝑃𝑛 of degree less than or equal to n
4. The set V of n-tuples in 𝑅𝑛 𝑜𝑟 𝐶 𝑛
5. The set V of all 𝑚 × 𝑛 matrices. The element 0 defined in property (iv) is the null matrix of order 𝑚 × 𝑛.
6
Example 1: Let V be the set of all ordered pairs (𝑥, 𝑦), where 𝑥, 𝑦 are real numbers.
Let 𝑎̅ = (𝑥₁, 𝑦₁) and 𝑏̅ = (𝑥₂, 𝑦₂) be elements of V. Define the addition as
𝑎̅ + 𝑏̅ = (𝑥₁, 𝑦₁) + (𝑥₂, 𝑦₂) = (𝑥₁𝑥₂, 𝑦₁𝑦₂)
and the scalar multiplication as 𝛼(𝑥₁, 𝑦₁) = (𝛼𝑥₁, 𝛼𝑦₁). Show that V is not a vector space.
Solution: Note that (1,1) is an element of V. From the given definition of vector addition, we find that
(𝑥1 , 𝑦1 ) + (1,1) = (𝑥1 , 𝑦1 ). This is true for the element (1,1). Therefore the element (1,1) plays the role
of ‘0’ element as defined in (iv) property.
Now, for 𝑥₁, 𝑦₁ ≠ 0, there exists an element (1/𝑥₁, 1/𝑦₁) such that (𝑥₁, 𝑦₁) + (1/𝑥₁, 1/𝑦₁) = (1, 1).
The element (1/𝑥₁, 1/𝑦₁) plays the role of the additive inverse.
Now let 𝛼 = 1, 𝛽 = 2 be any two scalars. We have (𝛼 + 𝛽)(𝑥₁, 𝑦₁) = 3(𝑥₁, 𝑦₁) = (3𝑥₁, 3𝑦₁),
and 𝛼(𝑥₁, 𝑦₁) + 𝛽(𝑥₁, 𝑦₁) = 1(𝑥₁, 𝑦₁) + 2(𝑥₁, 𝑦₁) = (2𝑥₁², 2𝑦₁²).
Therefore (𝛼 + 𝛽)(𝑥₁, 𝑦₁) ≠ 𝛼(𝑥₁, 𝑦₁) + 𝛽(𝑥₁, 𝑦₁), so property (vii) is not satisfied (property (ix) also fails). Hence V is not a vector space.
Linear independence of vectors: Let V be a vector space. A finite set {𝑉̅₁, 𝑉̅₂, 𝑉̅₃, …, 𝑉̅ₙ} of elements of V is said to be linearly dependent if there exist scalars 𝛼₁, 𝛼₂, 𝛼₃, …, 𝛼ₙ, not all zero, such that
𝛼₁𝑉̅₁ + 𝛼₂𝑉̅₂ + 𝛼₃𝑉̅₃ + ⋯ + 𝛼ₙ𝑉̅ₙ = 0
If the above equation is satisfied only for 𝛼₁ = 𝛼₂ = 𝛼₃ = ⋯ = 𝛼ₙ = 0, then the set of vectors is said to be linearly independent.
Note: 1. The set of vectors {𝑉̅₁, 𝑉̅₂, 𝑉̅₃, …, 𝑉̅ₙ} is linearly dependent if and only if at least one element of the set is a linear combination of the remaining elements.
2. The equation 𝛼₁𝑉̅₁ + 𝛼₂𝑉̅₂ + 𝛼₃𝑉̅₃ + ⋯ + 𝛼ₙ𝑉̅ₙ = 0 gives a homogeneous system of algebraic equations. For n vectors in ℝⁿ, if det(coefficient matrix) = 0, the vectors 𝑉̅₁, 𝑉̅₂, …, 𝑉̅ₙ are linearly dependent; if det(coefficient matrix) ≠ 0, the only solution is 𝛼₁ = 𝛼₂ = ⋯ = 𝛼ₙ = 0 and the vectors are linearly independent.
Example 2: Let 𝑉̅₁ = (1, −1, 0)ᵀ, 𝑉̅₂ = (0, 1, −1)ᵀ and 𝑉̅₃ = (0, 0, 1)ᵀ be elements of ℝ³. Show that the set of vectors {𝑉̅₁, 𝑉̅₂, 𝑉̅₃} is linearly independent.
Solution: Consider 𝛼₁𝑉̅₁ + 𝛼₂𝑉̅₂ + 𝛼₃𝑉̅₃ = 0̅. Substituting 𝑉̅₁, 𝑉̅₂, 𝑉̅₃, we obtain
𝛼₁(1, −1, 0)ᵀ + 𝛼₂(0, 1, −1)ᵀ + 𝛼₃(0, 0, 1)ᵀ = 0̅
⟹ 𝛼1 = 0, −𝛼1 + 𝛼2 = 0, −𝛼2 + 𝛼3 = 0
The solution of these equations is 𝛼₁ = 𝛼₂ = 𝛼₃ = 0.
Therefore the given set of vectors is linearly independent.
OR
Det(𝑉̅₁ 𝑉̅₂ 𝑉̅₃) =
|  1  0  0 |
| −1  1  0 | = 1 ≠ 0.
|  0 −1  1 |
Therefore the given set of vectors is linearly independent.
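The determinant test of the note above is easy to verify numerically. A minimal sketch (assuming NumPy is available), using the vectors of Example 2 as columns:

```python
import numpy as np

# Columns are the vectors V1, V2, V3 of Example 2.
V = np.array([[1, 0, 0],
              [-1, 1, 0],
              [0, -1, 1]], dtype=float)

det = np.linalg.det(V)            # non-zero => linearly independent
rank = np.linalg.matrix_rank(V)   # equals 3 for an independent set in R^3
independent = not np.isclose(det, 0.0)
```

The rank test also works for more vectors than dimensions (where the determinant is not defined), which is why both are shown.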
Example 3: Let 𝑉̅₁ = (1, −1, 0)ᵀ, 𝑉̅₂ = (0, 1, −1)ᵀ, 𝑉̅₃ = (0, 2, 1)ᵀ and 𝑉̅₄ = (1, 0, 3)ᵀ be elements of ℝ³. Show that the set of vectors {𝑉̅₁, 𝑉̅₂, 𝑉̅₃, 𝑉̅₄} is linearly dependent.
Solution: The given set of elements will be linearly dependent if there exist scalars 𝛼₁, 𝛼₂, 𝛼₃, 𝛼₄, not all zero, such that
𝛼1 𝑉̅1 + 𝛼2 𝑉̅2 + 𝛼3 𝑉̅3 + 𝛼4 𝑉̅4 = 0̅ …..(1)
Substituting for 𝑉̅1 , 𝑉̅2 , 𝑉̅3 , 𝑎𝑛𝑑 𝑉̅4 , we get
𝛼1 + 𝛼4 = 0,
−𝛼1 + 𝛼2 + 2𝛼3 = 0,
−𝛼2 + 𝛼3 + 3𝛼4 = 0
The solution of this system of equations is
𝛼₁ = −𝛼₄, 𝛼₂ = 5𝛼₄/3, 𝛼₃ = −4𝛼₄/3, and 𝛼₄ is arbitrary.
From (1), we obtain −𝛼₄𝑉̅₁ + (5𝛼₄/3)𝑉̅₂ − (4𝛼₄/3)𝑉̅₃ + 𝛼₄𝑉̅₄ = 0̅
Taking 𝛼₄ = 1: −𝑉̅₁ + (5/3)𝑉̅₂ − (4/3)𝑉̅₃ + 𝑉̅₄ = 0̅
Hence there exist scalars, not all zero, satisfying equation (1).
Therefore the given set of vectors is linearly dependent.
Orthogonal Vectors:
The vectors 𝑉̅1 𝑎𝑛𝑑 𝑉̅2 are said to be orthogonal vectors if 𝑉̅1 . 𝑉̅2 = 0, (𝑉̅1𝑇 . 𝑉̅2 = 0)
Example 4: The vectors 𝑉̅₁ = (1, 1, 2)ᵀ and 𝑉̅₂ = (−1, −1, 1)ᵀ are orthogonal, since
𝑉̅₁ · 𝑉̅₂ = 𝑉̅₁ᵀ𝑉̅₂ = (1)(−1) + (1)(−1) + (2)(1) = 0.
Example 5: The vectors 𝑉̅₁ = (3𝑖, 4𝑖, 0)ᵀ, 𝑉̅₂ = (−4𝑖, 3𝑖, 0)ᵀ and 𝑉̅₃ = (0, 0, 1+𝑖)ᵀ are pairwise orthogonal, since
𝑉̅₁ᵀ𝑉̅₂ = 𝑉̅₂ᵀ𝑉̅₃ = 𝑉̅₃ᵀ𝑉̅₁ = 0.
Orthonormal vectors:
The vectors 𝑉̅₁ and 𝑉̅₂ for which 𝑉̅₁ · 𝑉̅₂ = 0 and ‖𝑉̅₁‖ = ‖𝑉̅₂‖ = 1 are called orthonormal vectors.
Note: If 𝑉̅₁ and 𝑉̅₂ are any nonzero vectors such that 𝑉̅₁ · 𝑉̅₂ = 0, then 𝑉̅₁/‖𝑉̅₁‖ and 𝑉̅₂/‖𝑉̅₂‖ are orthonormal vectors.
Example 6: (1, 0, 0)ᵀ, (0, 1, 0)ᵀ and (0, 0, 1)ᵀ are orthogonal vectors.
Example 7: (3𝑖/5, 4𝑖/5, 0)ᵀ, (−4𝑖/5, 3𝑖/5, 0)ᵀ and (0, 0, (1+𝑖)/√2)ᵀ are orthonormal vectors.
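These orthogonality and normalization checks are easy to reproduce numerically. A minimal sketch assuming NumPy is available; note that it follows the document's convention of using the plain (unconjugated) transpose product for complex vectors:

```python
import numpy as np

# Two of the vectors of Example 7.
v1 = np.array([3j/5, 4j/5, 0])
v2 = np.array([-4j/5, 3j/5, 0])

dot12 = v1 @ v2                                  # plain transpose product, no conjugate
norm1 = np.sqrt(np.sum(v1 * np.conj(v1))).real   # Euclidean length of v1
```

With the standard Hermitian inner product one would write `np.vdot(v1, v2)` instead; for these particular vectors both products vanish.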
Projection of Vectors:
Given two vectors 𝑈̅ and 𝑉̅, we can ask how far we travel in the direction of 𝑉̅ when we travel along 𝑈̅.
The distance we travel in the direction of 𝑉̅ while traversing 𝑈̅ is called the component of 𝑈̅ with respect to 𝑉̅, denoted compᵥ𝑈̅.
The vector parallel to 𝑉̅, with magnitude compᵥ𝑈̅, in the direction of 𝑉̅ is called the projection of 𝑈̅ onto 𝑉̅, denoted projᵥ𝑈̅.
So, compᵥ𝑈̅ = ‖projᵥ𝑈̅‖.
Note that projᵥ𝑈̅ is a vector while compᵥ𝑈̅ is a scalar.
From the picture, compᵥ𝑈̅ = ‖𝑈̅‖ cos 𝜃.
We wish to find a formula for the projection of 𝑈̅ onto 𝑉̅.
Consider 𝑈̅ · 𝑉̅ = ‖𝑈̅‖ ‖𝑉̅‖ cos 𝜃.
Thus (𝑈̅ · 𝑉̅)/‖𝑉̅‖ = ‖𝑈̅‖ cos 𝜃,
so compᵥ𝑈̅ = (𝑈̅ · 𝑉̅)/‖𝑉̅‖.
The unit vector in the same direction as 𝑉̅ is 𝑉̅/‖𝑉̅‖, so
projᵥ𝑈̅ = ((𝑈̅ · 𝑉̅)/‖𝑉̅‖²) 𝑉̅.
Example 10:
a. Find the projection of 𝑢 = 𝑖 + 2𝑗 onto 𝑣 = 𝑖 + 𝑗.
𝑢 · 𝑣 = 1 + 2 = 3, ‖𝑣‖² = (√2)² = 2
projᵥ𝑢 = (𝑢 · 𝑣/‖𝑣‖²)𝑣 = (3/2)(𝑖 + 𝑗) = 3𝑖/2 + 3𝑗/2
compᵥ𝑢 = 𝑢 · 𝑣/‖𝑣‖ = 7/5
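The component and projection formulas above translate directly into code. A minimal sketch assuming NumPy is available; the vectors used for the component check (u = i + j, v = 3i + 4j) are an illustrative assumption, since the source's statement for that part is garbled:

```python
import numpy as np

def comp(u, v):
    """Scalar component of u in the direction of v: (u . v) / ||v||."""
    return np.dot(u, v) / np.linalg.norm(v)

def proj(u, v):
    """Vector projection of u onto v: ((u . v) / ||v||^2) v."""
    return (np.dot(u, v) / np.dot(v, v)) * v

# Example 10a: u = i + 2j onto v = i + j.
u = np.array([1.0, 2.0])
v = np.array([1.0, 1.0])
p = proj(u, v)     # -> [1.5, 1.5], i.e. 3i/2 + 3j/2
```

By construction, `comp(u, v)` equals the length of `proj(u, v)` whenever the dot product is non-negative.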
Example 11:
If coordinates in the plane are rotated by 45°, the vector 𝑖 is mapped to 𝑢 = (1/√2)𝑖 + (1/√2)𝑗, and the vector 𝑗 is mapped to 𝑣 = (−1/√2)𝑖 + (1/√2)𝑗. Find the components of 𝑤 = 2𝑖 − 5𝑗 with respect to the new coordinate vectors 𝑢 and 𝑣, i.e. express 𝑤 in terms of 𝑢 and 𝑣.
Solution:
𝑤 · 𝑢 = −3/√2, 𝑤 · 𝑣 = −7/√2, ‖𝑢‖ = ‖𝑣‖ = 1
So
compᵤ𝑤 = −3/√2, compᵥ𝑤 = −7/√2
and 𝑤 = (−3/√2)𝑢 + (−7/√2)𝑣.
Symmetric, Skew symmetric and orthogonal matrices
A matrix A = [𝑎ᵢⱼ] is said to be a real matrix if every element of A is real. A real square matrix A = [𝑎ᵢⱼ] is said to be
a) Symmetric: if 𝐴ᵀ = 𝐴
b) Skew-symmetric: if 𝐴ᵀ = −𝐴
c) Orthogonal: if 𝐴ᵀ = 𝐴⁻¹, i.e. 𝐴ᵀ𝐴 = 𝐼
Note: If A is orthogonal then |𝐴| = ±1
Example 12.
Examine the following
Complex matrix: A matrix 𝐴 = [𝑎ᵢⱼ] is said to be a complex matrix if at least one element 𝑎ᵢⱼ of A is complex.
Complex conjugate: Let 𝐴 = [𝑎ᵢⱼ] be a complex matrix. The complex conjugate of A is denoted by 𝐴̅ and is obtained by replacing each 𝑎ᵢⱼ of A by its complex conjugate.
Hermitian, skew-Hermitian and unitary matrices
A complex square matrix A is said to be
a) Hermitian: if (𝐴̅)ᵀ = 𝐴
b) Skew-Hermitian: if (𝐴̅)ᵀ = −𝐴
c) Unitary: if (𝐴̅)ᵀ = 𝐴⁻¹

Example 15: A =
[ 1      𝑖      −𝑖+2 ]
[ −𝑖     2      2+3𝑖 ]
[ 𝑖+2    2−3𝑖     3  ]
is Hermitian, since
𝐴ᵀ =
[ 1      −𝑖     𝑖+2  ]
[ 𝑖       2     2−3𝑖 ]
[ −𝑖+2   2+3𝑖    3   ]
and
(𝐴ᵀ)̅ =
[ 1      𝑖      −𝑖+2 ]
[ −𝑖     2      2+3𝑖 ]
[ 𝑖+2    2−3𝑖     3  ]
= 𝐴
Example 16: A =
[ −𝑖      1+2𝑖   3𝑖  ]
[ −1+2𝑖    0    4+𝑖  ]
[ 3𝑖      −4+𝑖   2𝑖  ]
is skew-Hermitian, since
𝐴ᵀ =
[ −𝑖     −1+2𝑖   3𝑖  ]
[ 1+2𝑖     0    −4+𝑖 ]
[ 3𝑖      4+𝑖    2𝑖  ]
and
(𝐴ᵀ)̅ =
[ 𝑖      −1−2𝑖  −3𝑖  ]
[ 1−2𝑖     0    −4−𝑖 ]
[ −3𝑖     4−𝑖   −2𝑖  ]
= −𝐴
Example 17: If A =
[ 𝑖  0  0 ]
[ 0  0  𝑖 ]
[ 0  𝑖  0 ]
then show that A is unitary and also skew-Hermitian.
Solution:
𝐴ᵀ =
[ 𝑖  0  0 ]
[ 0  0  𝑖 ]
[ 0  𝑖  0 ]
and (𝐴ᵀ)̅ =
[ −𝑖   0   0 ]
[  0   0  −𝑖 ]
[  0  −𝑖   0 ]
= −𝐴
Thus, A is skew-Hermitian.
Consider A(𝐴ᵀ)̅ =
[ 𝑖  0  0 ] [ −𝑖   0   0 ]   [ 1  0  0 ]
[ 0  0  𝑖 ] [  0   0  −𝑖 ] = [ 0  1  0 ] = 𝐼
[ 0  𝑖  0 ] [  0  −𝑖   0 ]   [ 0  0  1 ]
Thus, A is unitary.
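Both properties checked in Example 17 can be confirmed numerically. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Matrix of Example 17.
A = np.array([[1j, 0, 0],
              [0, 0, 1j],
              [0, 1j, 0]])

At_bar = np.conj(A.T)                         # conjugate transpose of A
skew_hermitian = np.allclose(At_bar, -A)      # (A^T)bar = -A
unitary = np.allclose(A @ At_bar, np.eye(3))  # A (A^T)bar = I
```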
Example 18:
Reduce the matrix A to echelon form, where
A =
[ 1   3   1  2 ]
[ 0  11   5  3 ]
[ 2  −5  −3  1 ]
[ 4   1  −1  5 ]
and hence find its rank.
Sol:
Applying R₃ → R₃ − 2R₁ and R₄ → R₄ − 4R₁:
[ 1    3    1   2 ]
[ 0   11    5   3 ]
[ 0  −11   −5  −3 ]
[ 0  −11   −5  −3 ]
Applying R₃ → R₃ + R₂ and R₄ → R₄ + R₂:
[ 1   3   1  2 ]
[ 0  11   5  3 ]
[ 0   0   0  0 ]
[ 0   0   0  0 ]
This is the echelon form of the matrix A.
Since the number of (linearly independent) non-zero rows is 2,
rank(A) = 2.
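The rank found by hand can be cross-checked with NumPy (assumed available). The sign pattern of A is partly ambiguous in the source, so the matrix below is the reconstruction whose rows reduce as in the worked example and should be treated as an assumption:

```python
import numpy as np

# Matrix of Example 18 (signs reconstructed; rows 3 and 4 are
# 2*row1 - row2 and 4*row1 - row2, so the rank is 2).
A = np.array([[1, 3, 1, 2],
              [0, 11, 5, 3],
              [2, -5, -3, 1],
              [4, 1, -1, 5]], dtype=float)

rank = np.linalg.matrix_rank(A)   # -> 2
```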
Example 19: Determine the value of 'b' such that the rank of A is 3, where
A =
[ 1  1  1  0 ]
[ 4  4  3  1 ]
[ b  2  2  2 ]
[ 9  9  b  3 ]
Sol:
Applying R₂ → R₂ − 4R₁, R₃ → R₃ − 2R₁, R₄ → R₄ − 9R₁:
[ 1    1   1    0 ]
[ 0    0  −1    1 ]
[ b−2  0   0    2 ]
[ 0    0  b−9   3 ]
Applying R₄ → R₄ − 3R₂:
[ 1    1   1    0 ]
[ 0    0  −1    1 ]
[ b−2  0   0    2 ]
[ 0    0  b−6   0 ]
Interchanging R₃ ↔ R₄:
[ 1    1   1    0 ]
[ 0    0  −1    1 ]
[ 0    0  b−6   0 ]
[ b−2  0   0    2 ]
Case 1: if b = 2, then |A| = 0 while a 3×3 minor is non-zero, so rank(A) = 3.
Case 2: if b = 6, the third row becomes zero, so the number of non-zero rows is 3; hence rank(A) = 3.
Thus rank(A) = 3 when b = 2 or b = 6.
Sol:
The given system of equations is
x + y + 2z = 2
−3x − y + z = 6
x + 3y + 4z = 4
Then the augmented matrix is
[A | B] =
[  1   1  2 | 2 ]
[ −3  −1  1 | 6 ]
[  1   3  4 | 4 ]
Applying R₂ → R₂ + 3R₁ and R₃ → R₃ − R₁:
[ 1  1  2 |  2 ]
[ 0  2  7 | 12 ]
[ 0  2  2 |  2 ]
Applying R₃ → R₃ − R₂:
[ 1  1   2 |   2 ]
[ 0  2   7 |  12 ]
[ 0  0  −5 | −10 ]
Hence rank(A) = 3 = rank(A | B).
Thus the system has a unique solution.
By backward substitution:
−5z = −10 ⟹ z = 2
2y + 7z = 12 ⟹ y = −1
x + y + 2z = 2 ⟹ x = −1
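Unique-solution systems of this kind can be solved directly with a library routine. A minimal sketch assuming NumPy is available; the coefficients below form an illustrative 3×3 system (the source's signs are partly ambiguous, so treat them as an assumption):

```python
import numpy as np

# Coefficient matrix and right-hand side of a consistent 3x3 system
# with rank(A) = 3 = rank(A|B), so the solution is unique.
A = np.array([[1.0, 1.0, 2.0],
              [-3.0, -1.0, 1.0],
              [1.0, 3.0, 4.0]])
B = np.array([2.0, 6.0, 4.0])

x = np.linalg.solve(A, B)   # internally uses LU with partial pivoting
```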
Example 21:
Solve
x + 2y + 3z = 0
3x + 4y + 4z = 0
7x + 10y + 12z = 0
Sol:
The system can be expressed as AX = 0, where
A =
[ 1   2   3 ]
[ 3   4   4 ]
[ 7  10  12 ]
Applying R₂ → R₂ − 3R₁ and R₃ → R₃ − 7R₁:
[ 1   2   3 ]
[ 0  −2  −5 ]
[ 0  −4  −9 ]
Applying R₃ → R₃ − 2R₂:
[ 1   2   3 ]
[ 0  −2  −5 ]
[ 0   0   1 ]
Hence rank(A) = 3 = number of unknowns.
Thus the system has only the trivial solution. By backward substitution,
x = 0, y = 0, z = 0.
Sol:
The given system of non-homogeneous linear equations, of the form AX = B, is
2x − 2y + 4z + 3w = 9
x − y + 2z + 2w = 6
2x − 2y + z + 2w = 3
x − y + w = 2
Taking the augmented matrix of the above equations, we get
[A | B] =
[ 2  −2  4  3 | 9 ]
[ 1  −1  2  2 | 6 ]
[ 2  −2  1  2 | 3 ]
[ 1  −1  0  1 | 2 ]
Apply elementary operations on [A | B] and reduce it to echelon form.
R₁ ↔ R₂:
[ 1  −1  2  2 | 6 ]
[ 2  −2  4  3 | 9 ]
[ 2  −2  1  2 | 3 ]
[ 1  −1  0  1 | 2 ]
Applying R₂ → R₂ − 2R₁, R₃ → R₃ − 2R₁, R₄ → R₄ − R₁:
[ 1  −1   2   2 |  6 ]
[ 0   0   0  −1 | −3 ]
[ 0   0  −3  −2 | −9 ]
[ 0   0  −2  −1 | −4 ]
R₂ ↔ R₄:
[ 1  −1   2   2 |  6 ]
[ 0   0  −2  −1 | −4 ]
[ 0   0  −3  −2 | −9 ]
[ 0   0   0  −1 | −3 ]
Applying R₃ → R₃ − (3/2)R₂:
[ 1  −1   2    2  |  6 ]
[ 0   0  −2   −1  | −4 ]
[ 0   0   0  −1/2 | −3 ]
[ 0   0   0   −1  | −3 ]
Applying R₄ → R₄ − 2R₃:
[ 1  −1   2    2  |  6 ]
[ 0   0  −2   −1  | −4 ]
[ 0   0   0  −1/2 | −3 ]
[ 0   0   0    0  |  3 ]
Rank(A) = 3 ≠ 4 = Rank[A | B].
Clearly the given system is inconsistent and therefore has no solution.
Sol:
Given x + 2y + 3z = 6
x + 3y + 5z = 9
2x + 5y + az = b
Consider the augmented matrix of the given equations:
[A | B] =
[ 1  2  3 | 6 ]
[ 1  3  5 | 9 ]
[ 2  5  a | b ]
Applying R₂ → R₂ − R₁ and R₃ → R₃ − 2R₁:
[ 1  2   3   |   6  ]
[ 0  1   2   |   3  ]
[ 0  1  a−6  | b−12 ]
Applying R₃ → R₃ − R₂:
[ 1  2   3   |   6  ]
[ 0  1   2   |   3  ]
[ 0  0  a−8  | b−15 ]
Case (1): If a = 8 and b ≠ 15, then
rank(A) = 2 ≠ 3 = rank[A | B].
In this case, the system AX = B is inconsistent and has no solution.
Case (2): If a ≠ 8 and b is any value, then
rank(A) = 3 = rank[A | B].
In this case, the system AX = B is consistent and has a unique solution:
(a − 8)z = b − 15 ⟹ z = (b − 15)/(a − 8)
y + 2z = 3 ⟹ y = 3 − 2z = (3a − 2b + 6)/(a − 8)
x + 2y + 3z = 6 ⟹ x = 6 − 2y − 3z = (b − 15)/(a − 8)
Case (3): If a = 8 and b = 15, then
rank(A) = 2 = rank[A | B].
In this case, the system AX = B is consistent and has an infinite number of solutions, with n − r = 3 − 2 = 1 free variable.
The equations reduce to
x + 2y + 3z = 6
y + 2z = 3
Let z = k (an arbitrary variable). Then y = 3 − 2k and x = k.
Hence the solution is (x, y, z) = (k, 3 − 2k, k).
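The three rank cases above can be probed numerically for specific values of a and b. A minimal sketch, assuming NumPy is available (the sample values 16 and 10 below are arbitrary choices of "b ≠ 15" and "a ≠ 8"):

```python
import numpy as np

def ranks(a, b):
    """Return (rank(A), rank([A|B])) for the system of the example."""
    A = np.array([[1, 2, 3],
                  [1, 3, 5],
                  [2, 5, a]], dtype=float)
    B = np.array([[6], [9], [b]], dtype=float)
    return (np.linalg.matrix_rank(A),
            np.linalg.matrix_rank(np.hstack([A, B])))

case1 = ranks(8, 16)   # a = 8, b != 15 -> (2, 3): inconsistent
case2 = ranks(9, 10)   # a != 8         -> (3, 3): unique solution
case3 = ranks(8, 15)   # a = 8, b = 15  -> (2, 2): infinitely many
```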
Solve the following equations
x y 2 z 3w 0
x 2y z w 0
4 x y 5 z 8w 0
5x 7 y 2 z w 0
Sol :
The given system of hom ogeneous linear equations can be exp ressed as AX 0.
x y 2 z 3w 0
x 2y z w 0
4 x y 5 z 8w 0
5x 7 y 2 z w 0
1 1 2 3
1 2 1 1
where the coefficient matrix A
4 1 5 8
5 7 2 1
R2 (1) R2
R3 R2 R3
R4 (2) R2 R4
1 1 2 3
0 3 3 4
0 3 3 4
0 12 12 16
R2 (1) R1 R2
R3 (4) R1 R3
R4 (5) R1 R4
1 1 2 3
0 3 3 4
0 0 0 0
0 0 0 0
20
Rank ( A) 2 4 no.of var iables
Thus the given system has non trivial solution.
then equations are
x y 2 z 3w 0
3 y 3z 4w 0
Choose z k2 and w k1
1 4
Then y (3z 4w) k2 k1
3 3
4 5
x 2 z 3w y 2k2 3k1 k2 k1 x k2 k1
3 3
5
k2 k1
x 3
y
k2 k1 4
z 3
k
w 2
k
1
1 1
4. Use the Gram-Schmidt method to make the vectors a = (1, 1, 1)ᵀ and b = (1, 0, 2)ᵀ orthogonal.
Sol:
Given a = (1, 1, 1)ᵀ and b = (1, 0, 2)ᵀ. Take A = a and
B = b − (Aᵀb / AᵀA) A
Here A = a = (1, 1, 1)ᵀ, so Aᵀ = (1 1 1), Aᵀb = 3 and AᵀA = 3:
B = (1, 0, 2)ᵀ − (3/3)(1, 1, 1)ᵀ = (0, −1, 1)ᵀ
A and B are orthogonal. Normalizing,
q₁ = (1/√3)(1, 1, 1)ᵀ = (1/√3, 1/√3, 1/√3)ᵀ
q₂ = (0, −1/√2, 1/√2)ᵀ
The orthogonal matrix is
Q =
[ 1/√3    0   ]
[ 1/√3  −1/√2 ]
[ 1/√3   1/√2 ]
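The single Gram-Schmidt step of this problem is a one-line computation in code. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Make b orthogonal to a by subtracting the projection of b onto a
# (one classical Gram-Schmidt step, as in the worked problem).
a = np.array([1.0, 1.0, 1.0])
b = np.array([1.0, 0.0, 2.0])

A = a
B = b - (A @ b) / (A @ A) * A        # -> [0, -1, 1]

q1 = A / np.linalg.norm(A)
q2 = B / np.linalg.norm(B)
Q = np.column_stack([q1, q2])        # columns are orthonormal
```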
Exercise
1. Find the value of 'k' such that the matrix A =
[ 1  2  1  2 ]
[ 2  1  2  1 ]
[ 7  8  k  8 ]
is of rank 2.
2. Find the rank of the matrix A =
[ 2  3   1  −2 ]
[ 1  2   0   2 ]
[ 1  4  −2  14 ]
3. Find the least squares approximate solution of the overdetermined system
[ 1   2 ]       [  5 ]
[ 2   1 ] [x] = [  4 ]
[ 1  −1 ] [y]   [ −1 ]
9. Prove that the matrix
[  4     1−3i ]
[ 1+3i    7   ]
is a Hermitian matrix.
UNIT-II
MATRIX EIGENVALUE PROBLEM AND QUADRATIC FORMS
A x = λ x
[Figure: a vector x in the x-y plane and its image Ax = λx.]
Applications of eigenvalues and eigenvectors
1. Using singular value decomposition for image compression: an image can be compressed by throwing away the small singular values of AAᵀ. Taking an 8-megapixel image of an Allosaurus, one can show how the image looks after compressing by keeping only the 1, 10, 25, 50, 100 and 200 largest singular values.
2. Deriving special relativity is more natural in the language of linear algebra: in fact, Einstein's second postulate really states that light is an eigenvector of the Lorentz transformation.
3. Spectral Clustering :Whether it’s in plants and biology , medical imaging , business and
marketing , understanding the connection between fields of Facebook or even criminology ,
clustering is an extremely important part of modern data analysis . It allows people to find
important subsystems or patterns inside noisy data sets . One such method is spectral clustering
which uses the eigenvalues of the graph of a network . Even the eigenvector of the second
smallest eigenvalue of the Laplacian matrix allows us to find the two largest clusters in a network
4. Dimensionality Reduction / PCA: the principal components correspond to the largest eigenvalues of AᵀA and yield the least-squares projection onto a smaller-dimensional hyperplane, with the eigenvectors becoming the axes of the hyperplane. Dimensionality reduction is extremely useful in machine learning and data analysis, as it allows one to understand where most of the variation in the data comes from.
5. Low-rank factorization for collaborative prediction: this is what Netflix does to predict what rating you'll give a movie you have not yet watched. It uses the SVD and throws away the smallest eigenvalues of AᵀA.
6. The Google PageRank algorithm: the eigenvector corresponding to the largest eigenvalue of the link graph of the internet determines how the pages are ranked.
QUADRATIC FORMS
Eigenvalues and eigenvectors can be used to solve the rotation of axes problem . Recall that classifying
the graph of the quadratic equation
𝑎𝑥 2 + 𝑏𝑥𝑦 + 𝑐𝑦 2 + 𝑑𝑥 + 𝑒𝑦 + 𝑓 = 0
is fairly straightforward as long as the equation has no xy-term (that is, b = 0). If the equation has an xy-term, however, then the classification is accomplished most easily by first performing a rotation of axes that eliminates the xy-term. The resulting equation (relative to the new x′y′-axes) will then be of the form
a′(x′)² + c′(y′)² + d′x′ + e′y′ + f′ = 0
You will see that the coefficients a′ and c′ are eigenvalues of the matrix
[  a    b/2 ]
[ b/2    c  ]
The expression ax² + bxy + cy² is called the quadratic form associated with the equation.
Example: Find the matrix of the quadratic form associated with each quadratic equation.
a. 4x² + 9y² − 36 = 0    b. 13x² − 10xy + 13y² − 72 = 0
Solution:
a) Here a = 4, b = 0 and c = 9; the matrix is A =
[ 4  0 ]
[ 0  9 ]
b) Because a = 13, b = −10 and c = 13, the matrix is A =
[ 13  −5 ]
[ −5  13 ]
In standard form, the equation 4x² + 9y² − 36 = 0 is x²/3² + y²/2² = 1,
which is the equation of the ellipse shown in Figure 1. [Figure 1: ellipse with semi-axes 3 and 2.]
Although it is not apparent by inspection, the graph of the equation 13x² − 10xy + 13y² − 72 = 0 is similar. In fact, when you rotate the x- and y-axes counterclockwise by 45° to form a new x′y′-coordinate system, this equation takes the form
(x′)²/3² + (y′)²/2² = 1
5) A Hermitian matrix has orthogonal eigenvectors.
6) The sum of the eigenvalues of A = trace(A).
7) The product of the eigenvalues of A = |A|.
8) If A is a real symmetric matrix, its eigenvalues are always real.
2. Find the sum and the product of the eigenvalues of A =
[ 3  7  5 ]
[ 2  4  3 ]
[ 1  2  2 ]
Sol:
We know that the sum of the eigenvalues = trace(A) and the product of the eigenvalues = |A|.
Let the eigenvalues of the matrix be λ₁, λ₂, λ₃.
The sum of the eigenvalues = trace(A) = λ₁ + λ₂ + λ₃ = 3 + 4 + 2 = 9.
The product of the eigenvalues = |A| = λ₁λ₂λ₃ = 3(8 − 6) − 7(4 − 3) + 5(0) = −1.
3. Find the eigenvalues and eigenvectors of A =
[ 8  −4 ]
[ 2   2 ]
Sol:
The characteristic equation is |A − λI| = 0:
| 8−λ   −4  |
|  2    2−λ | = 0
(8 − λ)(2 − λ) + 8 = 0
λ² − 10λ + 24 = 0
λ = 4, 6
The corresponding eigenvectors for λ = 4, 6:
Case 1: the eigenvector corresponding to λ = 4:
(A − λI)X = 0
[ 4  −4 ] [x]   [0]
[ 2  −2 ] [y] = [0]
Applying R₂ → R₂ − (1/2)R₁:
[ 4  −4 ] [x]   [0]
[ 0   0 ] [y] = [0]
4x − 4y = 0
Choose y = k; then x = k, so
(x, y)ᵀ = (k, k)ᵀ = k(1, 1)ᵀ
Case 2: the eigenvector corresponding to λ = 6:
(A − λI)X = 0
[ 2  −4 ] [x]   [0]
[ 2  −4 ] [y] = [0]
Applying R₂ → R₂ − R₁:
[ 2  −4 ] [x]   [0]
[ 0   0 ] [y] = [0]
2x − 4y = 0
Choose x = k; then y = k/2, so
(x, y)ᵀ = (k, k/2)ᵀ = k(1, 1/2)ᵀ
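Eigenvalue problems like this one map directly onto a library call. A minimal sketch, assuming NumPy is available (note that `eig` returns eigenvectors normalized to unit length, so they agree with the hand computation only up to a scalar multiple):

```python
import numpy as np

A = np.array([[8.0, -4.0],
              [2.0, 2.0]])

evals, evecs = np.linalg.eig(A)
order = np.argsort(evals)     # sort the eigenvalues: 4, then 6
evals = evals[order]
evecs = evecs[:, order]       # column i pairs with evals[i]
```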
4. If the eigenvalues of a square matrix A of order 2×2 are 4 and 6, then find the following:
(i) the eigenvalues of Aᵀ
(ii) the eigenvalues of A⁻¹
(iii) the eigenvalues of B = kA, where k = 1/2
(iv) the eigenvalues of A²
(v) the eigenvalues of B = A − kI, where k = 2
(vi) the eigenvalues of B = A + kI, where k = 1
Sol:
(i) 4, 6; (ii) 1/4, 1/6; (iii) 2, 3; (iv) 16, 36; (v) 4 − 2 = 2 and 6 − 2 = 4; (vi) 4 + 1 = 5 and 6 + 1 = 7.
5) What is the quadratic form associated with the matrix A =
[  3  −1   0 ]
[ −1   2  −1 ]
[  0  −1   3 ]
Sol :
If X = (x₁, x₂, x₃)ᵀ, then
f(X) = XᵀAX = [x₁ x₂ x₃]
[  3  −1   0 ] [x₁]
[ −1   2  −1 ] [x₂] = 3x₁² + 2x₂² + 3x₃² − 2x₁x₂ − 2x₂x₃
[  0  −1   3 ] [x₃]
6) Find the nature of the quadratic form 𝟐𝒙𝟐 + 𝟐𝒚𝟐 + 𝟐𝒛𝟐 + 𝟐𝒚𝒛
Sol :
The associated symmetric matrix of the given quadratic form is A =
[ 2  0  0 ]
[ 0  2  1 ]
[ 0  1  2 ]
The roots of the characteristic equation det(A − λI) = 0 are 1, 2, 3.
All the roots are positive, so the quadratic form is positive definite.
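The definiteness test via eigenvalues is a two-line computation. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Symmetric matrix of the quadratic form 2x^2 + 2y^2 + 2z^2 + 2yz.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

evals = np.linalg.eigvalsh(A)              # ascending order -> [1, 2, 3]
positive_definite = bool(np.all(evals > 0))
```

For a symmetric matrix, `eigvalsh` is preferred over `eigvals` because it guarantees real eigenvalues returned in ascending order.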
7) Find the index and signature of the quadratic form 3𝒙𝟐 + 𝟐𝒚𝟐 + 𝟑𝒛𝟐 − 𝟐𝒙𝒚 − 𝟐𝒚𝒛
Sol :
3 −1 0
The associated symmetric matrix of the given quadratic form is 𝐴 = [−1 2 −1]
0 −1 3
The roots of the characteristic equation det(𝐴 − 𝜆𝐼 ) = 0 are 3, 1, 4 also the sum of the squares
form is 3𝑦1 2 + 𝑦2 2 + 4𝑦3 2 .
Here r = rank(A) = 3, s= index= no. of positive terms =3, Signature = 2s – r = 3.
8) Find the matrix P which diagonalizes the matrix A =
[ 4  1 ]
[ 2  3 ]
Sol :
The characteristic equation of A is det(𝐴 − 𝜆𝐼 ) = 𝜆2 − 7𝜆 + 10 = 0
Clearly, the eigenvalues of A are λ = 2 and 5.
The corresponding eigen vectors are
for λ = 2, X₁ = k(1, −2)ᵀ, and
for λ = 5, X₂ = k(1, 1)ᵀ.
Therefore the modal matrix P =
[  1  1 ]
[ −2  1 ]
diagonalizes the matrix A =
[ 4  1 ]
[ 2  3 ]
9) Diagonalize the matrix A =
[  1  1 ]
[ −1  1 ]
over the reals. Is it diagonalizable over the complex numbers?
Sol:
Sol :
The characteristic equation of A is det(𝐴 − 𝜆𝐼 ) = 𝜆2 − 2𝜆 + 2 = 0
29
Therefore the Eigen roots are 𝜆 = 1 ± 𝑖
Clearly, the matrix A has no real eigenvalues, so it is not diagonalizable over the reals.
But it is diagonalizable over the complex numbers, meaning that we can find an invertible complex matrix
P =
[ 1   1 ]
[ i  −i ]
such that P⁻¹AP = D =
[ 1+i   0  ]
[  0   1−i ]
10) Is the matrix A =
[  1  −2   3   4  −5 ]
[ −2   6   0  −1   8 ]
[  3   0   7  −4  −6 ]
[  4  −1  −4  −9   1 ]
[ −5   8  −6   1   4 ] (5×5)
orthogonally diagonalizable? Why?
Sol :
We know that every symmetric matrix is orthogonally diagonalizable . The given matrix
A is symmetric of order 5x5. Hence the matrix A is orthogonally diagonalizable.
11) Diagonalize the matrix A =
[ 5  4 ]
[ 1  2 ]
and hence find A¹⁵.
Sol :
We can easily diagonalize the given matrix of order 2×2. The modal matrix is
P =
[ 4   1 ]
[ 1  −1 ]
such that P⁻¹AP = D =
[ 6  0 ]
[ 0  1 ]
where 6 and 1 are the eigenvalues.
Therefore, A¹⁵ = P D¹⁵ P⁻¹ = P
[ 6¹⁵  0 ]
[  0   1 ] P⁻¹
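Computing the power through the diagonalization is easy to verify against direct repeated multiplication. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [1.0, 2.0]])

# Modal matrix from the worked solution and D = diag(6, 1).
P = np.array([[4.0, 1.0],
              [1.0, -1.0]])
D = np.diag([6.0, 1.0])

# A^15 via the diagonalization: P D^15 P^-1.
A15 = P @ np.diag([6.0**15, 1.0]) @ np.linalg.inv(P)

# Cross-check against direct matrix exponentiation.
check = np.linalg.matrix_power(A, 15)
```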
𝟎 𝟏 𝟎
12. If possible, diagonalize the matrix 𝑨 = [𝟎 𝟎 𝟏]
𝟐 −𝟓 𝟒
Sol :
The Characteristic equation of A is :det(𝐴 − 𝜆𝐼 ) = 𝜆3 − 4𝜆2 + 5𝜆 − 2 = 0
The Eigenvalues are : 𝜆 = 1,1,2
The corresponding eigenvector for λ₁ = λ₂ = 1 is X₁ = k(1, 1, 1)ᵀ, and for λ₃ = 2 it is X₂ = k(1/4, 1/2, 1)ᵀ.
Since the Eigenvalue 𝜆1 = 𝜆2 = 1 has algebraic multiplicity 2, but geometric multiplicity 1, A is
not diagonalizable.
Long Answer Questions
5. Find the eigenvalues and eigenvectors of A =
[ 3  1  4 ]
[ 0  2  6 ]
[ 0  0  5 ]
Sol:
Since the matrix A is upper triangular, the eigenvalues are the diagonal elements of A.
The eigenvalues of A are 2, 3 and 5.
The eigenvectors of A:
Case 1: for λ = 2:
(A − λI)X = 0
[ 1  1  4 ] [x]   [0]
[ 0  0  6 ] [y] = [0]
[ 0  0  3 ] [z]   [0]
Applying R₃ → R₃ − (1/2)R₂:
[ 1  1  4 ] [x]   [0]
[ 0  0  6 ] [y] = [0]
[ 0  0  0 ] [z]   [0]
6z = 0 ⟹ z = 0
x + y + 4z = 0 ⟹ x + y = 0
Choose y = k; then x = −k.
The corresponding eigenvector for λ = 2 is
X₁ = (−k, k, 0)ᵀ = k(−1, 1, 0)ᵀ
Case 2: for λ = 5:
(A − λI)X = 0
[ −2   1  4 ] [x]   [0]
[  0  −3  6 ] [y] = [0]
[  0   0  0 ] [z]   [0]
−2x + y + 4z = 0
−3y + 6z = 0
Choose z = k; then y = 2k and x = 3k.
The corresponding eigenvector for λ = 5 is
X₂ = (3k, 2k, k)ᵀ = k(3, 2, 1)ᵀ
Case 3: for λ = 3:
(A − λI)X = 0
[ 0   1  4 ] [x]   [0]
[ 0  −1  6 ] [y] = [0]
[ 0   0  2 ] [z]   [0]
Applying R₂ → R₂ + R₁:
[ 0  1   4 ] [x]   [0]
[ 0  0  10 ] [y] = [0]
[ 0  0   2 ] [z]   [0]
Applying R₃ → R₃ − (1/5)R₂:
[ 0  1   4 ] [x]   [0]
[ 0  0  10 ] [y] = [0]
[ 0  0   0 ] [z]   [0]
y + 4z = 0 and z = 0, hence y = 0; let x = k.
The eigenvector corresponding to λ = 3 is
X₃ = (k, 0, 0)ᵀ = k(1, 0, 0)ᵀ
6. Find the eigenvalues and eigenvectors of A =
[ 1  0  −1 ]
[ 1  2   1 ]
[ 2  2   3 ]
Also determine whether the eigenvectors are orthogonal.
Sol:
The characteristic equation of A is |A − λI| = 0:
| 1−λ   0    −1  |
|  1   2−λ   1   | = 0
|  2    2   3−λ  |
λ³ − 6λ² + 11λ − 6 = 0
(λ − 1)(λ − 2)(λ − 3) = 0
λ = 1, 2, 3 are the distinct eigenvalues of A.
The eigenvector corresponding to the eigenvalue λ = 1:
[ 0  0  −1 ] [x]   [0]
[ 1  1   1 ] [y] = [0]
[ 2  2   2 ] [z]   [0]
−z = 0 and x + y + z = 0, so z = 0 and x = −y; taking y = −1 gives X₁ = (1, −1, 0)ᵀ.
The eigenvector corresponding to the eigenvalue λ = 2:
[ −1  0  −1 ] [x]   [0]
[  1  0   1 ] [y] = [0]
[  2  2   1 ] [z]   [0]
Applying R₂ → R₂ + R₁ and R₃ → R₃ + 2R₁:
[ −1  0  −1 ] [x]   [0]
[  0  0   0 ] [y] = [0]
[  0  2  −1 ] [z]   [0]
−x − z = 0
2y − z = 0
Choose z = k; then x = −k and y = k/2:
(x, y, z)ᵀ = k(−1, 1/2, 1)ᵀ or k(−2, 1, 2)ᵀ
The eigenvector corresponding to the eigenvalue λ = 3:
[ −2   0  −1 ] [x]   [0]
[  1  −1   1 ] [y] = [0]
[  2   2   0 ] [z]   [0]
which reduces to
x − y + z = 0
2y − z = 0
Choose z = k; then y = k/2 and x = −k/2:
(x, y, z)ᵀ = k(−1/2, 1/2, 1)ᵀ or k(−1, 1, 2)ᵀ
The eigenvectors for λ = 1, 2, 3 respectively are
X₁ = (1, −1, 0)ᵀ, X₂ = (−2, 1, 2)ᵀ and X₃ = (−1, 1, 2)ᵀ
Since X₁ᵀX₂ = −3 ≠ 0, X₂ᵀX₃ = 7 ≠ 0 and X₃ᵀX₁ = −2 ≠ 0,
no two of the eigenvectors are orthogonal (as expected, since A is not symmetric).
7. Use the Gram-Schmidt method to make the vectors a = (1, 1, 1)ᵀ and b = (1, 0, 2)ᵀ orthogonal.
Sol:
Given a = (1, 1, 1)ᵀ and b = (1, 0, 2)ᵀ. Take A = a and
B = b − (Aᵀb / AᵀA) A
Here Aᵀ = (1 1 1), Aᵀb = 3 and AᵀA = 3, so
B = (1, 0, 2)ᵀ − (3/3)(1, 1, 1)ᵀ = (0, −1, 1)ᵀ
A and B are orthogonal. Normalizing,
q₁ = (1/√3)(1, 1, 1)ᵀ = (1/√3, 1/√3, 1/√3)ᵀ and q₂ = (0, −1/√2, 1/√2)ᵀ
The orthogonal matrix is
Q =
[ 1/√3    0   ]
[ 1/√3  −1/√2 ]
[ 1/√3   1/√2 ]
8. Find an orthogonal matrix that will diagonalize the real symmetric matrix
A =
[ 1  2  3 ]
[ 2  4  6 ]
[ 3  6  9 ]
and also find the resulting diagonal matrix.
Sol:
The characteristic equation is |A − λI| = 0, i.e.
| 1−λ   2    3  |
|  2   4−λ   6  | = 0
|  3    6   9−λ |
λ³ − 14λ² = 0
The eigenvalues are λ = 0, 0, 14.
The eigenvectors corresponding to λ = 0:
[ 1  2  3 ] [x]   [0]
[ 2  4  6 ] [y] = [0]
[ 3  6  9 ] [z]   [0]
Applying R₂ → R₂ − 2R₁ and R₃ → R₃ − 3R₁:
[ 1  2  3 ] [x]   [0]
[ 0  0  0 ] [y] = [0]
[ 0  0  0 ] [z]   [0]
Let z = k₁ and y = k₂; then x + 2y + 3z = 0 gives x = −2k₂ − 3k₁, so
(x, y, z)ᵀ = k₁(−3, 0, 1)ᵀ + k₂(−2, 1, 0)ᵀ
The eigenvector corresponding to λ = 14:
[ −13   2    3 ] [x]   [0]
[  2   −10   6 ] [y] = [0]
[  3    6   −5 ] [z]   [0]
Applying R₂ → 13R₂ + 2R₁ and R₃ → 13R₃ + 3R₁:
[ −13    2    3 ] [x]   [0]
[  0  −126   84 ] [y] = [0]
[  0    84  −56 ] [z]   [0]
Applying R₃ → 126R₃ + 84R₂:
[ −13    2    3 ] [x]   [0]
[  0  −126   84 ] [y] = [0]
[  0     0    0 ] [z]   [0]
Let z = k; then y = 2k/3 and x = k/3, so
(x, y, z)ᵀ = (k/3)(1, 2, 3)ᵀ
Take X₁ = (−3, 0, 1)ᵀ, X₂ = (−2, 1, 0)ᵀ, X₃ = (1, 2, 3)ᵀ, with
‖X₁‖ = √10, ‖X₂‖ = √5, ‖X₃‖ = √14.
This would give
P =
[ −3/√10  −2/√5  1/√14 ]
[   0      1/√5  2/√14 ]
[  1/√10    0    3/√14 ]
but here ⟨X₁, X₂⟩ = 6 ≠ 0 while ⟨X₂, X₃⟩ = 0 and ⟨X₃, X₁⟩ = 0, so the columns are not all orthogonal.
Instead, let X₁ = (a, b, c)ᵀ with X₂ = (−2, 1, 0)ᵀ and X₃ = (1, 2, 3)ᵀ, and require
−2a + b = 0
a + 2b + 3c = 0
Let b = k; then a = k/2 and c = −5k/6, so
(a, b, c)ᵀ = (k/6)(3, 6, −5)ᵀ
Take X₁ = (3, 6, −5)ᵀ, X₂ = (−2, 1, 0)ᵀ, X₃ = (1, 2, 3)ᵀ, with ‖X₁‖ = √70.
Then
P =
[  3/√70  −2/√5  1/√14 ]
[  6/√70   1/√5  2/√14 ]
[ −5/√70    0    3/√14 ]
and
D = PᵀAP =
[ 0  0   0 ]
[ 0  0   0 ]
[ 0  0  14 ]
9. Diagonalize A =
[ 1  6  1 ]
[ 1  2  0 ]
[ 0  0  3 ]
and hence find A⁸. Find the modal matrix.
Sol:
The characteristic equation of A is det(A − λI) = λ³ − 6λ² + 5λ + 12 = 0.
The eigenvalues are λ = −1, 3, 4.
The corresponding eigenvectors are:
for λ₁ = −1, X₁ = k(−3, 1, 0)ᵀ; for λ₂ = 3, X₂ = k(1, 1, −4)ᵀ; and for λ₃ = 4, X₃ = k(2, 1, 0)ᵀ.
The modal matrix is
P =
[ −3   1  2 ]
[  1   1  1 ]
[  0  −4  0 ]
with
P⁻¹ = (1/20)
[ −4  8   1 ]
[  0  0  −5 ]
[  4  12  4 ]
Furthermore P⁻¹AP = D = diag(−1, 3, 4), as can be easily checked.
Also
A⁸ = P D⁸ P⁻¹ =
[ 26215  78642  24574 ]
[ 13107  39322  11467 ]
[   0      0     6561 ]
𝟐 𝟏 𝟏
10. Orthogonally diagonalize the matrix𝑨 = [𝟏 𝟐 𝟏]
𝟏 𝟏 𝟐
Sol :
The Characteristic equation of A is :det(𝐴 − 𝜆𝐼 ) = 𝜆3 − 6𝜆2 + 9𝜆 − 4 = 0
The Eigenvalues are : 𝜆 = 1, 1, 4
The corresponding eigenvectors: for λ₁ = 4, X₁ = k(1, 1, 1)ᵀ, and for λ₂ = λ₃ = 1, X₂ = k(−1, 0, 1)ᵀ and X₃ = k(−1, 1, 0)ᵀ. We need three orthonormal eigenvectors. First, apply the Gram-Schmidt process to (−1, 0, 1)ᵀ and (−1, 1, 0)ᵀ to obtain (−1, 0, 1)ᵀ and (−1/2, 1, −1/2)ᵀ. The new vector has been
constructed to be orthogonal to (−1, 0, 1)ᵀ, and, being still an eigenvector for λ = 1, it is also orthogonal to (1, 1, 1)ᵀ.
Thus we have three mutually orthogonal vectors, and all we need to do is normalize them and
construct a matrix Q with these vectors as its columns. We find that
1/√3 −1/√2 −1/√6
𝑄 = [1/√3 0 2/√6 ]
1/√3 1/√2 −1/√6
and one can easily verify that QᵀAQ = D =
[ 4  0  0 ]
[ 0  1  0 ]
[ 0  0  1 ]
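For a symmetric matrix, a library routine performs the orthogonal diagonalization in one call. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

# eigh returns real eigenvalues in ascending order and an orthonormal
# set of eigenvectors, so Q^T A Q is already diagonal.
evals, Q = np.linalg.eigh(A)   # evals -> [1, 1, 4]
D = Q.T @ A @ Q
```

The eigenvectors `eigh` returns for the repeated eigenvalue may differ from the hand-picked ones above, but they span the same eigenspace.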
11. Find the orthogonal transformation which transforms the quadratic form 𝒙𝟏 𝟐 + 𝟑𝒙𝟐 𝟐 + 𝟑𝒙𝟑 𝟐 −
𝟐𝒙𝟐 𝒙𝟑 to a canonical form.
Sol :
The coefficient matrix A of the given quadratic form is
1 0 0
𝐴 = [0 3 −1]
0 −1 3
The characteristic equation of A is det(𝐴 − 𝜆𝐼 ) = 𝜆3 − 7𝜆2 + 14𝜆 − 8 = 0
By solving this polynomial equation we can get the latent roots: 𝜆 = 1, 2, 4
And the corresponding eigen vectors are
for λ₁ = 1, X₁ = k(1, 0, 0)ᵀ; for λ₂ = 2, X₂ = k(0, 1, 1)ᵀ; and for λ₃ = 4, X₃ = k(0, 1, −1)ᵀ.
Normalizing,
e₁ = X₁/‖X₁‖ = (1, 0, 0)ᵀ, e₂ = X₂/‖X₂‖ = (0, 1/√2, 1/√2)ᵀ and e₃ = X₃/‖X₃‖ = (0, 1/√2, −1/√2)ᵀ.
Furthermore, with P̂ = [e₁ e₂ e₃], P̂⁻¹AP̂ = D = diag(1, 2, 4), as can be easily checked.
Now consider the required non-singular linear transformation 𝑋 = 𝑃̂ 𝑌 that transforms or reduces
the given quadratic form to canonical form.
We know that the quadratic form is Q = XᵀAX where X = P̂Y; then
Q = (P̂Y)ᵀA(P̂Y) = (YᵀP̂ᵀ)A(P̂Y) = Yᵀ(P̂ᵀAP̂)Y = YᵀDY = y₁² + 2y₂² + 4y₃²,
which is the required canonical form.
12. Reduce the quadratic form 𝟑𝒙𝟏 𝟐 + 𝟑𝒙𝟐 𝟐 + 𝟑𝒙𝟑 𝟐 + 𝟐𝒙𝟏 𝒙𝟐 + 𝟐𝒙𝟏 𝒙𝟑 − 𝟐𝒙𝟐 𝒙𝟑 into sum of squares
form by an orthogonal transformation.
Sol :
The coefficient matrix A of the given quadratic form is
3 1 1
𝐴 = [1 3 −1]
1 −1 3
The characteristic equation of A is det(𝐴 − 𝜆𝐼 ) = 𝜆3 − 9𝜆2 + 24𝜆 − 16 = 0
By solving this polynomial equation we can get the latent roots: 𝜆 = 1, 4, 4
And the corresponding eigen vectors are
−1 1 1
𝜆1 = 1 𝑖𝑠 𝑋1 = 𝑘 [ 1 ] , 𝜆2 = 4 𝑖𝑠 𝑋2 𝑎𝑛𝑑 𝑋3 = 𝑘1 [0] + 𝑘2 [1]
1 1 0
Clearly, the vectors 𝑋1 , 𝑋3 𝑎𝑛𝑑 𝑋1 , 𝑋2 are pair wise orthogonal but, the vectors 𝑋2 , 𝑋3 are not
−1
pair wise orthogonal. Consider 𝑢1 = 𝑋1 = [ 1 ]
1
Using the Gram-Schmidt process we can find orthogonal vectors u₂ = (1, 0, 1)ᵀ and u₃ = (1/2, 1, −1/2)ᵀ.
The normalized vectors of the above vectors u₁, u₂, u₃ are
(−1/√3, 1/√3, 1/√3)ᵀ, (1/√2, 0, 1/√2)ᵀ and (1/√6, 2/√6, −1/√6)ᵀ,
which form the columns of P̂.
Furthermore P̂⁻¹AP̂ = D = diag(1, 4, 4), as can be easily checked.
Now consider the required non-singular linear transformation 𝑋 = 𝑃̂ 𝑌 that transforms or reduces
the given quadratic form to canonical form.
Therefore, the canonical form is
             [ 1  0  0 ] [y₁]
[y₁ y₂ y₃]   [ 0  4  0 ] [y₂] = y₁² + 4y₂² + 4y₃²
             [ 0  0  4 ] [y₃]
13. Find the eigenvalues and eigenvectors of A =
[  2     3+4i ]
[ 3−4i    2   ]
Sol:
|A − λI| =
| 2−λ    3+4i |
| 3−4i   2−λ  | = 0
(2 − λ)² − (3 + 4i)(3 − 4i) = 0
λ² − 4λ − 21 = 0
λ = −3, 7
(the eigenvalues of a Hermitian matrix are real)
For λ = −3:
[  5     3+4i ] [x₁]   [0]
[ 3−4i    5   ] [x₂] = [0]
⟹ 5x₁ + (3 + 4i)x₂ = 0 ⟹ x₁ = −((3 + 4i)/5) x₂
Exercise
10. Find the eigenvector corresponding to the largest Eigen value of the matrix
4 3 1
𝐴 = [0 5 2 ]
0 0 8
11. Find the eigenvalues and eigenvectors of B = 2A² − (1/2)A + 3I, where A =
[ 8  −4 ]
[ 2   2 ]
12. Prove that the Eigenvalue of a skew Hermitian matrix are purely imaginary or zero
13. Diagonalize the matrix (i) A =
[ 11  −4  −7 ]
[  7  −2  −5 ]
[ 10  −4  −6 ]
and hence find A⁵.
7 −2 1
14. Diagonalize the matrix 𝐴 = [−2 10 −2]
1 −2 7
15. Determine the nature of the quadratic form Q(X) = 17x² − 30xy + 17y².
17. Reduce the quadratic form to sum of squares form (canonical form) and find the corresponding
linear transformation. Also find the index and signature.
(a) 6x₁² + 3x₂² + 3x₃² − 4x₁x₂ − 2x₂x₃ + 4x₃x₁.
(b) 4x2 + 3y2 + z2 – xy – 6yz + 4xz. [ ans 4y12 – y22 + y32]
18. Find the eigenvalues and eigenvectors of A =
[ −i   0   0 ]
[  0   0  −i ]
[  0  −i   0 ]
Ans: λ = −i, −i, i, with eigenvectors X₁ = (1, 0, 0)ᵀ and X₂ = (0, 1, 1)ᵀ for λ = −i, and X₃ = (0, 1, −1)ᵀ for λ = i.
Unit-III
Matrix Decomposition and Least squares solution of algebraic system
LU Decomposition:
Let A be a square matrix. An LU factorization refers to the factorization of A, with proper row and/or
column orderings or permutations, into two factors – a lower triangular matrix L and an upper triangular
matrix U: A=LU
In the lower triangular matrix all elements above the diagonal are zero; in the upper triangular matrix, all the elements below the diagonal are zero. For example, for a 3 × 3 matrix A, its LU decomposition looks like this:
[ a₁₁  a₁₂  a₁₃ ]   [ l₁₁   0    0  ] [ u₁₁  u₁₂  u₁₃ ]
[ a₂₁  a₂₂  a₂₃ ] = [ l₂₁  l₂₂   0  ] [  0   u₂₂  u₂₃ ]
[ a₃₁  a₃₂  a₃₃ ]   [ l₃₁  l₃₂  l₃₃ ] [  0    0   u₃₃ ]
Without a proper ordering or permutations in the matrix, the factorization may fail to materialize. For
example, it is easy to verify (by expanding the matrix multiplication) that 𝑎11 = 𝑙11 𝑢11 . If 𝑎11 = 0, then
at least one of 𝑙11 and 𝑢11 has to be zero, which implies that either L or U is singular. This is impossible if
A is nonsingular (invertible). This is a procedural problem. It can be removed by simply reordering the
rows of A so that the first element of the permuted matrix is nonzero. The same problem in subsequent
factorization steps can be removed in the same way.
1. Solve the following equations using (LU decomposition method) Crout's method.
𝒙𝟏 + 𝒙𝟐 + 𝒙𝟑 = 𝟏, 𝟒𝒙𝟏 + 𝟑𝒙𝟐 − 𝒙𝟑 = 𝟔 & 𝟑𝒙𝟏 + 𝟓𝒙𝟐 + 𝟑𝒙𝟑 = 𝟒
Solution:
The given system of equations can be written as AX=B
1 1 1 𝑥1 1
where A=[4 3 −1] ; 𝑋 = [𝑥2 ] & 𝐵 = [6]
3 5 3 𝑥3 4
Using the LU decomposition method, Choose LU=A
𝑙11 0 0 1 𝑢12 𝑢13
where 𝐿 = [𝑙21 𝑙22 0 ] 𝑎𝑛𝑑 𝑈 = [0 1 𝑢23 ]
𝑙31 𝑙32 𝑙33 0 0 1
𝐿𝑈 = 𝐴
𝑙11 0 0 1 𝑢12 𝑢13 1 1 1
[𝑙21 𝑙22 0 ] [0 1 𝑢23 ] = [4 3 −1]
𝑙31 𝑙32 𝑙33 0 0 1 3 5 3
Multiplying and comparing the corresponding elements then we get
l₁₁ = 1; l₁₁u₁₂ = 1 ⟹ u₁₂ = 1; l₁₁u₁₃ = 1 ⟹ u₁₃ = 1; l₂₁ = 4;
l₂₁u₁₂ + l₂₂ = 3 ⟹ l₂₂ = −1; l₂₁u₁₃ + l₂₂u₂₃ = −1 ⟹ u₂₃ = 5;
l₃₁ = 3; l₃₁u₁₂ + l₃₂ = 5 ⟹ l₃₂ = 2; l₃₁u₁₃ + l₃₂u₂₃ + l₃₃ = 3 ⟹ l₃₃ = −10;
1 0 0 1 1 1
𝐿 = [4 −1 0 ] 𝑎𝑛𝑑 𝑈 = [0 1 5] ;
3 2 −10 0 0 1
Now consider 𝐿𝑌 = 𝐵
1 0 0 𝑦1 1
[4 −1 0 ] [𝑦2 ] = [6]
3 2 −10 𝑦3 4
Comparing the corresponding elements, we get y₁ = 1, y₂ = −2 and y₃ = −1/2, so
Y = (1, −2, −1/2)ᵀ
Now considering the matrix equation 𝑈𝑋 = 𝑌, 𝑤𝑒 𝑔𝑒𝑡
[ 1  1  1 ] [x₁]   [  1   ]
[ 0  1  5 ] [x₂] = [ −2   ]
[ 0  0  1 ] [x₃]   [ −1/2 ]
By solving, we get x₁ = 1, x₂ = 1/2 and x₃ = −1/2.
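The Crout-style factorization of this example (U with unit diagonal) can be coded in a few lines. A minimal sketch, assuming NumPy is available; note that it performs no pivoting, so it only works when the leading elements l_jj stay non-zero, as in this example:

```python
import numpy as np

def lu_crout(A):
    """Crout LU factorization without pivoting: A = L U with L lower
    triangular and U upper triangular with unit diagonal."""
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        for i in range(j, n):                 # column j of L
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        for k in range(j + 1, n):             # row j of U
            U[j, k] = (A[j, k] - L[j, :j] @ U[:j, k]) / L[j, j]
    return L, U

A = np.array([[1.0, 1.0, 1.0],
              [4.0, 3.0, -1.0],
              [3.0, 5.0, 3.0]])
B = np.array([1.0, 6.0, 4.0])

L, U = lu_crout(A)
Y = np.linalg.solve(L, B)   # forward substitution: LY = B
X = np.linalg.solve(U, Y)   # back substitution:    UX = Y
```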
Solution:
The given system of equations is
4𝑥 − 𝑦 − 𝑧 = 3; −𝑥 + 4𝑦 − 3𝑧 = −0.5 ; −𝑥 − 3𝑦 + 5𝑧 = 0
4 −1 −1 𝑥 3
Then 𝐴 = [−1 4 −3] ; 𝑋 = [𝑦 ] & 𝐵 = [−0.5]
−1 −3 5 𝑧 0
𝐿𝑒𝑡 𝐴 = 𝐿𝑈
1) Solve x + y = 1, x + 2y = 2, x + 3y = 2.
Sol: Here
$$A = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}, \quad \text{i.e.} \quad \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}$$
Augmented matrix:
$$[A|B] = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 2 \\ 1 & 3 & 2 \end{bmatrix}$$
R₂ → R₂ − R₁, R₃ → R₃ − R₁:
$$[A|B] \sim \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 2 & 1 \end{bmatrix}$$
R₃ → R₃ − 2R₂:
$$[A|B] \sim \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & -1 \end{bmatrix}$$
Here ρ(A) = 2 and ρ([A|B]) = 3, so the system is inconsistent: it has no exact solution. Fitting the line y = a + bx through the three data points gives the same overdetermined system
a + b = 1, a + 2b = 2, a + 3b = 2,
which can only be solved in the least squares sense.
To project b̄ onto ā, take the projection p̄ along ā so that the error e = b − p is at right angles to ā. Clearly p is some multiple of ā: let p̄ = x ā. Then ā ⊥ e gives aᵀe = 0 (i.e. ⟨a, e⟩ = 0):
$$a^T(b - ax) = 0 \;\Rightarrow\; a^T b - a^T a\, x = 0 \;\Rightarrow\; x = \frac{a^T b}{a^T a}$$
Then the projection is
$$\bar p = x\,\bar a = \frac{a^T b}{a^T a}\,\bar a$$
Similarly, for projection onto the plane spanned by two vectors a₁, a₂ (the columns of a matrix A), we seek p = x̂₁a₁ + x̂₂a₂ = A x̂ with
a₁ ⊥ e ⇒ a₁ᵀe = 0 and a₂ ⊥ e ⇒ a₂ᵀe = 0, i.e.
$$\begin{bmatrix} a_1^T \\ a_2^T \end{bmatrix} e = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \;\Rightarrow\; A^T(B - A\hat x) = 0 \;\Rightarrow\; A^T B - A^T A\,\hat x = 0$$
$$\hat x = (A^T A)^{-1} A^T B, \qquad P = A\hat x = A(A^T A)^{-1} A^T B$$
Example: Solve x = 3, x + y = 4, x + 2y = 1.
Sol: Here
$$A = \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 3 \\ 4 \\ 1 \end{bmatrix}$$
For an overdetermined problem we should solve AᵀA x̂ = AᵀB .........(1)
$$A^T A = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} 3 & 3 \\ 3 & 5 \end{bmatrix}, \qquad A^T B = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \end{bmatrix} \begin{bmatrix} 3 \\ 4 \\ 1 \end{bmatrix} = \begin{bmatrix} 8 \\ 6 \end{bmatrix}$$
From (1),
$$\begin{bmatrix} 3 & 3 \\ 3 & 5 \end{bmatrix} \begin{bmatrix} \hat x \\ \hat y \end{bmatrix} = \begin{bmatrix} 8 \\ 6 \end{bmatrix}$$
Augmented matrix: [A|B] = [[3, 3, 8], [3, 5, 6]]; R₂ → R₂ − R₁ gives [[3, 3, 8], [0, 2, −2]]
⇒ 3x̂ + 3ŷ = 8 and 2ŷ = −2 ⇒ ŷ = −1, then x̂ = 11/3.
Hence the solution is x̂ = 11/3, ŷ = −1.
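The normal-equations computation above can be reproduced numerically. A small sketch, assuming NumPy is available (`np.linalg.lstsq` is the library routine that solves the same minimisation directly):

```python
import numpy as np

# Overdetermined system from the example: x = 3, x + y = 4, x + 2y = 1
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
B = np.array([3.0, 4.0, 1.0])

# Least squares solution via the normal equations A^T A x = A^T B
x_hat = np.linalg.solve(A.T @ A, A.T @ B)   # -> [11/3, -1]

# lstsq minimises ||A x - B|| directly and agrees with the normal equations
x_lstsq, *_ = np.linalg.lstsq(A, B, rcond=None)
```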
Q-R FACTORIZATION
Any real square matrix A may be decomposed as A = QR, where Q is an orthogonal matrix (its columns are orthogonal unit vectors) and R is an upper triangular matrix (also called a right triangular matrix). If A is invertible, then the factorization is unique if we require the diagonal elements of R to be positive.
If instead A is a complex square matrix, then there is a decomposition A = QR where Q is a unitary matrix.
If A has n linearly independent columns, then the first n columns of Q form an orthonormal basis for the column space of A. More generally, the first k columns of Q form an orthonormal basis for the span of the first k columns of A for any 1 ≤ k ≤ n. The fact that any column k of A depends only on the first k columns of Q is responsible for the triangular form of R.
Let A be a matrix with independent columns a, b, c and orthonormalized columns q₁, q₂, q₃. Then A can be written as QR, where Q = [q₁ q₂ q₃] is orthogonal and
$$R = \begin{bmatrix} q_1^T a & q_1^T b & q_1^T c \\ 0 & q_2^T b & q_2^T c \\ 0 & 0 & q_3^T c \end{bmatrix}$$
Example 1: Find the QR factorization for
$$A = \begin{bmatrix} 1 & -1 & 4 \\ 1 & 4 & -2 \\ 1 & 4 & 2 \\ 1 & -1 & 0 \end{bmatrix}$$
Sol: Let the columns be a = (1, 1, 1, 1)ᵀ, b = (−1, 4, 4, −1)ᵀ, c = (4, −2, 2, 0)ᵀ, and apply the Gram-Schmidt process. The first orthogonal vector is
A₁ = a = (1, 1, 1, 1)ᵀ.
Since A₁ᵀb = −1 + 4 + 4 − 1 = 6 and A₁ᵀA₁ = 4,
$$B_1 = b - \frac{A_1^T b}{A_1^T A_1} A_1 = b - \frac{3}{2} a = \left(-\frac{5}{2}, \frac{5}{2}, \frac{5}{2}, -\frac{5}{2}\right)^T$$
Since A₁ᵀc = 4, B₁ᵀc = −10 and B₁ᵀB₁ = 25,
$$C_1 = c - \frac{A_1^T c}{A_1^T A_1} A_1 - \frac{B_1^T c}{B_1^T B_1} B_1 = c - A_1 + \frac{2}{5} B_1 = (2, -2, 2, -2)^T$$
Here ‖A₁‖ = 2, ‖B₁‖ = 5, ‖C₁‖ = 4, so normalizing,
q₁ = ½(1, 1, 1, 1)ᵀ, q₂ = ½(−1, 1, 1, −1)ᵀ, q₃ = ½(1, −1, 1, −1)ᵀ
$$Q = \begin{bmatrix} 1/2 & -1/2 & 1/2 \\ 1/2 & 1/2 & -1/2 \\ 1/2 & 1/2 & 1/2 \\ 1/2 & -1/2 & -1/2 \end{bmatrix}$$
Computing the entries of R: q₁ᵀa = 2, q₁ᵀb = 3, q₁ᵀc = 2, q₂ᵀb = 5, q₂ᵀc = −2, q₃ᵀc = 4, so
$$R = \begin{bmatrix} q_1^T a & q_1^T b & q_1^T c \\ 0 & q_2^T b & q_2^T c \\ 0 & 0 & q_3^T c \end{bmatrix} = \begin{bmatrix} 2 & 3 & 2 \\ 0 & 5 & -2 \\ 0 & 0 & 4 \end{bmatrix}$$
and A = QR.
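The same factorization can be computed with a short classical Gram-Schmidt routine. A sketch assuming NumPy is available (`gram_schmidt_qr` is an illustrative name; a production code would prefer `np.linalg.qr`, which uses Householder reflections and may differ by column signs):

```python
import numpy as np

def gram_schmidt_qr(A):
    """QR via classical Gram-Schmidt (columns of A assumed independent)."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # projection coefficient q_i^T a_j
            v -= R[i, j] * Q[:, i]        # subtract the projection
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1., -1., 4.], [1., 4., -2.], [1., 4., 2.], [1., -1., 0.]])
Q, R = gram_schmidt_qr(A)
# Q has orthonormal columns and A = QR, with R = [[2, 3, 2], [0, 5, -2], [0, 0, 4]]
```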
Example 2: Find the QR-factorization of
$$A = \begin{bmatrix} 1 & 2 & 2 \\ -1 & 1 & 2 \\ -1 & 0 & 1 \\ 1 & 1 & 2 \end{bmatrix}$$
Sol: First we note that the columns {x₁, x₂, x₃} of the matrix A form a linearly independent set. Use the Gram-Schmidt process on {x₁, x₂, x₃} to find orthogonal vectors {u₁, u₂, u₃}:
$$u_1 = \begin{bmatrix} 1 \\ -1 \\ -1 \\ 1 \end{bmatrix}, \quad u_2 = \begin{bmatrix} 3/2 \\ 3/2 \\ 1/2 \\ 1/2 \end{bmatrix} \quad \text{and} \quad u_3 = \begin{bmatrix} -1/2 \\ 0 \\ 1/2 \\ 1 \end{bmatrix}$$
Then normalize these vectors (‖u₁‖ = 2, ‖u₂‖ = √5, ‖u₃‖ = √(3/2)):
$$e_1 = \frac{u_1}{\|u_1\|} = \begin{bmatrix} 1/2 \\ -1/2 \\ -1/2 \\ 1/2 \end{bmatrix}, \quad e_2 = \frac{u_2}{\|u_2\|} = \begin{bmatrix} 3/(2\sqrt5) \\ 3/(2\sqrt5) \\ 1/(2\sqrt5) \\ 1/(2\sqrt5) \end{bmatrix}, \quad e_3 = \frac{u_3}{\|u_3\|} = \begin{bmatrix} -1/\sqrt6 \\ 0 \\ 1/\sqrt6 \\ 2/\sqrt6 \end{bmatrix}$$
So Q = [e₁ e₂ e₃]. Let A = QR be the required factorization, where Q is orthogonal (QᵀQ = I) and R is an invertible upper triangular matrix. Therefore QᵀA = Qᵀ(QR) = (QᵀQ)R = IR = R, and we compute
$$R = Q^T A = \begin{bmatrix} 2 & 1 & 1/2 \\ 0 & \sqrt5 & 3\sqrt5/2 \\ 0 & 0 & \sqrt{3/2} \end{bmatrix}$$
∴ A = QR.
Example 3: Find the QR-factorization of
$$A = \begin{bmatrix} 1 & -1 & 2 \\ 0 & 1 & 3 \\ 3 & -3 & 4 \end{bmatrix}$$
Sol: First we note that the columns {x₁, x₂, x₃} of the matrix A form a linearly independent set. Use the Gram-Schmidt process on {x₁, x₂, x₃} to find orthogonal vectors {u₁, u₂, u₃}:
$$u_1 = \begin{bmatrix} 1 \\ 0 \\ 3 \end{bmatrix}, \quad u_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \quad \text{and} \quad u_3 = \begin{bmatrix} 3/5 \\ 0 \\ -1/5 \end{bmatrix}$$
Then normalize these vectors (‖u₁‖ = √10, ‖u₂‖ = 1, ‖u₃‖ = √10/5):
$$e_1 = \begin{bmatrix} 1/\sqrt{10} \\ 0 \\ 3/\sqrt{10} \end{bmatrix}, \quad e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad e_3 = \begin{bmatrix} 3/\sqrt{10} \\ 0 \\ -1/\sqrt{10} \end{bmatrix}$$
So
$$Q = \begin{bmatrix} 1/\sqrt{10} & 0 & 3/\sqrt{10} \\ 0 & 1 & 0 \\ 3/\sqrt{10} & 0 & -1/\sqrt{10} \end{bmatrix}, \qquad R = Q^T A = \begin{bmatrix} \sqrt{10} & -\sqrt{10} & 14/\sqrt{10} \\ 0 & 1 & 3 \\ 0 & 0 & 2/\sqrt{10} \end{bmatrix}$$
∴ A = QR.
Suppose M is an m × n matrix whose entries come from the field K, which is either the field of real numbers or the field of complex numbers. Then there exists a factorization, called a singular value decomposition of M, of the form
$$M = U \Sigma V^*$$
where U is an m × m unitary matrix over K, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V is an n × n unitary matrix over K (V* is its conjugate transpose; for real matrices, Vᵀ).
The diagonal entries σᵢ of Σ are known as the singular values of M. A common convention is to list the singular values in descending order. In this case, the diagonal matrix Σ is uniquely determined by M (though not the matrices U and V if M is not square).
1. Find the singular value decomposition for the matrix
$$A = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
Sol: AᵀA has eigenvalues λ₁ = 2, λ₂ = 1, λ₃ = 0, with orthonormal eigenvectors
v₁ = (1/√2, 1/√2, 0)ᵀ, v₂ = (0, 0, 1)ᵀ, v₃ = (−1/√2, 1/√2, 0)ᵀ.
The singular values are σ₁ = √2, σ₂ = 1. Thus
$$V = \begin{bmatrix} 1/\sqrt2 & 0 & -1/\sqrt2 \\ 1/\sqrt2 & 0 & 1/\sqrt2 \\ 0 & 1 & 0 \end{bmatrix} \quad \text{and} \quad \Sigma = \begin{bmatrix} \sqrt2 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$
To find U, we compute
$$u_1 = \frac{1}{\sigma_1} A v_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad \text{and} \quad u_2 = \frac{1}{\sigma_2} A v_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
Therefore
$$U = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
One easily verifies that
$$U \Sigma V^T = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \sqrt2 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 1/\sqrt2 & 1/\sqrt2 & 0 \\ 0 & 0 & 1 \\ -1/\sqrt2 & 1/\sqrt2 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = A$$
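The decomposition can be cross-checked with the library routine. A sketch assuming NumPy is available; note that `np.linalg.svd` may return U and V with different signs or an ordering of the zero-singular-value directions than the hand computation, but the singular values and the reconstructed product agree:

```python
import numpy as np

A = np.array([[1., 1., 0.], [0., 0., 1.]])
U, s, Vt = np.linalg.svd(A)   # A = U @ diag(s) @ Vt (full SVD)

# singular values are the square roots of the eigenvalues of A^T A
# here s = [sqrt(2), 1]; rebuild A from the three factors
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)
A_rebuilt = U @ Sigma @ Vt
```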
2. Find the singular value decomposition for the matrix
$$A = \begin{bmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}$$
Sol: Compute
$$A^T A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$$
which has eigenvalues λ₁ = 3, λ₂ = 1 and corresponding eigenvectors (1, 1)ᵀ and (−1, 1)ᵀ.
The singular values of A are σ₁ = √λ₁ = √3, σ₂ = √λ₂ = 1.
The normalized vectors are v₁ = (1/√2, 1/√2)ᵀ, v₂ = (−1/√2, 1/√2)ᵀ, so
$$V = \begin{bmatrix} 1/\sqrt2 & -1/\sqrt2 \\ 1/\sqrt2 & 1/\sqrt2 \end{bmatrix} \quad \text{and} \quad \Sigma = \begin{bmatrix} \sqrt3 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}$$
Also we can find
$$u_1 = \frac{1}{\sigma_1} A v_1 = \begin{bmatrix} 2/\sqrt6 \\ 1/\sqrt6 \\ 1/\sqrt6 \end{bmatrix} \quad \text{and} \quad u_2 = \frac{1}{\sigma_2} A v_2 = \begin{bmatrix} 0 \\ -1/\sqrt2 \\ 1/\sqrt2 \end{bmatrix}$$
and using the Gram-Schmidt process we can find u₃ = (−1/√3, 1/√3, 1/√3)ᵀ, which is orthogonal to u₁ and u₂, and take the matrix
$$U = \begin{bmatrix} 2/\sqrt6 & 0 & -1/\sqrt3 \\ 1/\sqrt6 & -1/\sqrt2 & 1/\sqrt3 \\ 1/\sqrt6 & 1/\sqrt2 & 1/\sqrt3 \end{bmatrix}$$
Hence verify that
$$U \Sigma V^T = \begin{bmatrix} 2/\sqrt6 & 0 & -1/\sqrt3 \\ 1/\sqrt6 & -1/\sqrt2 & 1/\sqrt3 \\ 1/\sqrt6 & 1/\sqrt2 & 1/\sqrt3 \end{bmatrix} \begin{bmatrix} \sqrt3 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1/\sqrt2 & -1/\sqrt2 \\ 1/\sqrt2 & 1/\sqrt2 \end{bmatrix}^T = \begin{bmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} = A$$
If the columns of a matrix A are linearly independent, then AᵀA is invertible and we obtain the pseudoinverse with the following formula:
A⁺ = (AᵀA)⁻¹ · Aᵀ
If instead the rows of the matrix are linearly independent, we obtain the pseudoinverse with the formula:
A⁺ = Aᵀ · (AAᵀ)⁻¹
Example: Find the pseudoinverse of
$$A = \begin{bmatrix} 1 & 2 & 1 & 3 \\ 4 & 3 & 2 & 1 \end{bmatrix}$$
Sol: Since
$$\begin{vmatrix} 1 & 2 \\ 4 & 3 \end{vmatrix} = 3 - 8 = -5 \ne 0$$
we have rank(A) = 2 = number of rows, so the rows are linearly independent and the pseudoinverse is A⁺ = Aᵀ(AAᵀ)⁻¹.
$$A^T = \begin{bmatrix} 1 & 4 \\ 2 & 3 \\ 1 & 2 \\ 3 & 1 \end{bmatrix}, \qquad A A^T = \begin{bmatrix} 15 & 15 \\ 15 & 30 \end{bmatrix}$$
|AAᵀ| = 15(30 − 15) = 225, so
$$(A A^T)^{-1} = \frac{1}{225}\begin{bmatrix} 30 & -15 \\ -15 & 15 \end{bmatrix} = \begin{bmatrix} 2/15 & -1/15 \\ -1/15 & 1/15 \end{bmatrix}$$
$$A^+ = A^T (A A^T)^{-1} = \begin{bmatrix} 1 & 4 \\ 2 & 3 \\ 1 & 2 \\ 3 & 1 \end{bmatrix} \begin{bmatrix} 2/15 & -1/15 \\ -1/15 & 1/15 \end{bmatrix} = \begin{bmatrix} -2/15 & 3/15 \\ 1/15 & 1/15 \\ 0 & 1/15 \\ 5/15 & -2/15 \end{bmatrix}$$
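The hand computation can be checked against the library's Moore-Penrose pseudoinverse. A sketch assuming NumPy is available (`np.linalg.pinv` computes A⁺ via the SVD, which agrees with the closed-form formula when the rows are independent):

```python
import numpy as np

A = np.array([[1., 2., 1., 3.], [4., 3., 2., 1.]])

# rows of A are linearly independent, so A+ = A^T (A A^T)^{-1}
A_plus = A.T @ np.linalg.inv(A @ A.T)

# A+ is a right inverse here: A @ A_plus = I (2x2 identity)
right_identity = A @ A_plus
```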
Exercise
1. Find the matrix of singular values of the matrix A = [[3, −1], [2, 4]].
2. Find the matrix of singular values of the matrix A = [[4, 4], [−3, 3]].
3. Perform a full SVD (singular value decomposition) of the matrix A = [[1, 1, 0], [0, 1, 1]].
4. Perform a full SVD (singular value decomposition) of the matrix A = [[3, 1, 1], [−1, 3, 1]].
5. Compute the matrix cos(A) for the matrix A = [[1, 2], [3, 2]]; use a 2-decimal approximation.
6. Determine the nature of the quadratic form Q(X) = 17x² − 30xy + 17y².
7. Perform a QR factorization of the matrix A = [[1, 1, 1], [1, −1, 2], [−1, 1, 0], [1, 5, 1]] by the Gram-Schmidt process.
8. Perform a QR factorization of the matrix A = [[1, 2, 2], [−1, 1, 2], [−1, 0, 1], [1, 1, 2]] by the Gram-Schmidt process.
9. Find the Moore-Penrose pseudo-inverse of the matrix A = [[1, 2], [2, 1], [1, −1]].
10. Find the least squares approximate solution of the overdetermined system [[1, 2], [2, 1], [1, −1]] (x, y)ᵀ = (5, 4, −1)ᵀ.
11. Find the least squares approximate solution of the overdetermined system [[1, 1], [1, 2], [1, 3]] (x, y)ᵀ = (1, 2, 2)ᵀ.
13. Using the Cholesky method solve the system of equations 16x + 4y + 4z − 4w = 32; 4x + 10y + 4z + 2w = 26; 4x + 4y + 6z − 2w = 20; −4x + 2y − 2z + 4w = −6.
Unit - IV
Multivariable differential calculus and Function Optimization
Partial Differentiation
Introduction: In mathematics, a function sometimes depends on two or more variables. In this case the concept of the partial derivative arises. Partial derivatives are used widely in vector calculus and differential geometry.
Functions of Two Variables
If there are 3 variables, say x, y, z, and the value of z depends upon the values of x and y, then z is called a function of the two variables x and y.
The derivative of z with respect to x, treating y as constant, is called the partial derivative of z with respect to x. It is denoted by ∂z/∂x or z_x and defined as
$$\frac{\partial z}{\partial x} = z_x = \lim_{h \to 0} \frac{f(x+h, y) - f(x, y)}{h}$$
Similarly, the derivative of z with respect to y, treating x as constant, is denoted by ∂z/∂y or z_y and defined as
$$\frac{\partial z}{\partial y} = z_y = \lim_{k \to 0} \frac{f(x, y+k) - f(x, y)}{k}$$
∂z/∂x and ∂z/∂y are called the first order partial derivatives of z.
Notation
The second order partial derivatives of z = f(x, y) are
$$\frac{\partial^2 z}{\partial x^2} = \frac{\partial}{\partial x}\!\left(\frac{\partial z}{\partial x}\right) = z_{xx}, \qquad \frac{\partial^2 z}{\partial y^2} = \frac{\partial}{\partial y}\!\left(\frac{\partial z}{\partial y}\right) = z_{yy}$$
$$\frac{\partial^2 z}{\partial x \partial y} = \frac{\partial}{\partial x}\!\left(\frac{\partial z}{\partial y}\right) = z_{xy}, \qquad \frac{\partial^2 z}{\partial y \partial x} = \frac{\partial}{\partial y}\!\left(\frac{\partial z}{\partial x}\right) = z_{yx}$$
Composite Functions
Chain Rule
If z = f(u) where u is a function of the variables x and y, then
$$\frac{\partial z}{\partial x} = \frac{dz}{du}\frac{\partial u}{\partial x} = f'(u)\frac{\partial u}{\partial x} \qquad \text{and similarly} \qquad \frac{\partial z}{\partial y} = f'(u)\frac{\partial u}{\partial y}$$
Composite function of one variable (Total differential coefficient)
Let u = f(x, y) where x = φ(t), y = ψ(t); then u is a function of t, called a composite function of the single variable t, and
$$\frac{du}{dt} = \frac{\partial u}{\partial x}\frac{dx}{dt} + \frac{\partial u}{\partial y}\frac{dy}{dt}$$
is called the total differential of u. Similarly, if u = f(x, y, z) is a composite function with x = x(t), y = y(t), z = z(t), then
$$\frac{du}{dt} = \frac{\partial u}{\partial x}\frac{dx}{dt} + \frac{\partial u}{\partial y}\frac{dy}{dt} + \frac{\partial u}{\partial z}\frac{dz}{dt}$$
If z = f(x, y) is a composite function with x = x(u, v), y = y(u, v), then
$$\frac{\partial z}{\partial u} = \frac{\partial z}{\partial x}\frac{\partial x}{\partial u} + \frac{\partial z}{\partial y}\frac{\partial y}{\partial u}, \qquad \frac{\partial z}{\partial v} = \frac{\partial z}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial z}{\partial y}\frac{\partial y}{\partial v}$$
Jacobian
If u = f(x, y) and v = φ(x, y) are two continuous functions of the independent variables x and y such that u_x, u_y, v_x, v_y are also continuous in x and y, the Jacobian of u, v with respect to x, y is defined as
$$J = \begin{vmatrix} u_x & u_y \\ v_x & v_y \end{vmatrix}$$
Notation
$$J\!\left(\frac{u, v}{x, y}\right) = \frac{\partial(u, v)}{\partial(x, y)}$$
Similarly, the Jacobian of u, v, w with respect to x, y, z is defined as
$$\frac{\partial(u, v, w)}{\partial(x, y, z)} = J\!\left(\frac{u, v, w}{x, y, z}\right) = \begin{vmatrix} u_x & u_y & u_z \\ v_x & v_y & v_z \\ w_x & w_y & w_z \end{vmatrix}$$
An important application of the Jacobian is in connection with the change of variables in multiple integrals.
Properties of the Jacobian
If J₁ is the Jacobian of u, v with respect to x, y and J₂ is the Jacobian of x, y with respect to u, v, then J₁J₂ = 1.
If u, v are functions of x, y and x, y are functions of r, s, then
$$\frac{\partial(u, v)}{\partial(r, s)} = \frac{\partial(u, v)}{\partial(x, y)} \cdot \frac{\partial(x, y)}{\partial(r, s)}$$
If u, v, w are functions of x, y, z and u, v, w are not independent (i.e. are dependent), then
$$\frac{\partial(u, v, w)}{\partial(x, y, z)} = 0$$
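The Jacobian of a concrete map can be approximated numerically, which gives a quick check of hand computations such as ∂(x,y)/∂(r,θ) = r for polar coordinates. A minimal sketch, assuming NumPy is available (`jacobian_fd` is an illustrative helper, not a library routine):

```python
import numpy as np

def jacobian_fd(f, p, h=1e-6):
    """Numerical Jacobian of f: R^n -> R^n at point p by central differences."""
    p = np.asarray(p, dtype=float)
    n = p.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(p + e) - f(p - e)) / (2 * h)  # column j: df/dp_j
    return J

# polar coordinate map (r, theta) -> (x, y)
polar = lambda q: np.array([q[0] * np.cos(q[1]), q[0] * np.sin(q[1])])
r, theta = 2.0, 0.7
J = jacobian_fd(polar, [r, theta])
# det J = d(x,y)/d(r,theta) ~ r, and the inverse map has determinant 1/r
```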
Gradient vector
The vector
$$\nabla f = \begin{bmatrix} \partial f/\partial x \\ \partial f/\partial y \end{bmatrix}$$
represents the gradient vector associated with the function z = f(x, y). At each point P in its domain, f increases most rapidly in the direction of the gradient vector ∇f at P. Geometrically, the gradient vector represents the surface normal vector to a given surface.
[Figure: the gradient vector ∇f on the surface z = f(x, y)]
Hessian matrix
The Hessian matrix is the Jacobian matrix of the second order partial derivatives of a function. The determinant of the Hessian matrix is also referred to as the Hessian. For a two-variable function, the Hessian matrix is defined by
$$H = \begin{pmatrix} \partial^2 f/\partial x^2 & \partial^2 f/\partial x \partial y \\ \partial^2 f/\partial x \partial y & \partial^2 f/\partial y^2 \end{pmatrix} \equiv \begin{pmatrix} r & s \\ s & t \end{pmatrix}$$
Function Optimization
(A) Unconstrained optimization using the Hessian matrix
Second derivative test for a function of two variables:
Let a function f(x, y) be continuous and possess first and second order partial derivatives at a point p(a, b) where f_x(a, b) = 0 and f_y(a, b) = 0 (i.e. (a, b) is a critical point of f). Let D₁ = f_xx and D₂ = f_xx f_yy − f_xy² be the leading principal minors of the Hessian H.
(a) If D₁(a, b) > 0 and D₂(a, b) > 0, then H is positive definite and f has a relative minimum at (a, b).
(b) If D₁(a, b) < 0 and D₂(a, b) > 0, then H is negative definite and f has a relative maximum at (a, b).
(c) If D₂(a, b) < 0, then H is indefinite and f has a saddle point at (a, b).
For a function of three variables, with D₁, D₂, D₃ the leading principal minors of H at a critical point (a, b, c):
(a) If D₁ > 0, D₂ > 0 and D₃ > 0, then H is positive definite and f has a relative minimum at (a, b, c).
(b) If D₁ < 0, D₂ > 0 and D₃ < 0, then H is negative definite and f has a relative maximum at (a, b, c).
(c) In any other case where D₃ ≠ 0, H is indefinite and f has a saddle point at (a, b, c).
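The two-variable test can be written as a tiny classifier. A sketch in plain Python (the function name is illustrative); it is applied here to f(x, y) = x³ + y³ − 3xy, whose critical points are (0, 0) and (1, 1) with Hessian H = [[6x, −3], [−3, 6y]]:

```python
def classify_critical_point(H):
    """Classify a critical point of f(x, y) from its Hessian H = [[r, s], [s, t]]
    using the leading principal minors D1 = r and D2 = r*t - s^2."""
    D1 = H[0][0]
    D2 = H[0][0] * H[1][1] - H[0][1] ** 2
    if D2 > 0:
        return "relative minimum" if D1 > 0 else "relative maximum"
    if D2 < 0:
        return "saddle point"
    return "test inconclusive"

# f(x, y) = x^3 + y^3 - 3xy, Hessian H = [[6x, -3], [-3, 6y]]
at_origin = classify_critical_point([[0, -3], [-3, 0]])   # (0, 0): saddle point
at_one_one = classify_critical_point([[6, -3], [-3, 6]])  # (1, 1): relative minimum
```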
(B) Constrained optimization: Lagrange's method of multipliers
To find the stationary values of f(x, y, z) subject to the constraint φ(x, y, z) = 0, form the auxiliary function
F = f(x, y, z) + λ φ(x, y, z) ---------------(1)
and set its partial derivatives to zero, i.e.
∂F/∂x = ∂f/∂x + λ ∂φ/∂x = 0 ---------------------(2)
∂F/∂y = ∂f/∂y + λ ∂φ/∂y = 0 ----------------------(3)
∂F/∂z = ∂f/∂z + λ ∂φ/∂z = 0 -----------------------(4)
On solving (1), (2), (3) and (4) we can find the values of x, y, z and λ for which f(x, y, z) has a stationary value.
1). If u = (x² + y² + z²)^{−1/2}, then show that
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} = 0$$
Solution: Given u = (x² + y² + z²)^{−1/2}. Then
$$u_x = -\tfrac12 (x^2+y^2+z^2)^{-3/2}(2x) = -x\,(x^2+y^2+z^2)^{-3/2}$$
$$u_{xx} = -(x^2+y^2+z^2)^{-3/2} + 3x^2(x^2+y^2+z^2)^{-5/2} = (2x^2 - y^2 - z^2)(x^2+y^2+z^2)^{-5/2}$$
Similarly,
$$u_{yy} = (2y^2 - x^2 - z^2)(x^2+y^2+z^2)^{-5/2}, \qquad u_{zz} = (2z^2 - x^2 - y^2)(x^2+y^2+z^2)^{-5/2}$$
Adding, we get u_xx + u_yy + u_zz = 0.
2). If u = tan⁻¹(y/x), verify that ∂²u/∂x∂y = ∂²u/∂y∂x.
Solution: Given u = tan⁻¹(y/x), then
$$\frac{\partial u}{\partial x} = \frac{1}{1 + (y/x)^2}\cdot\left(-\frac{y}{x^2}\right) = \frac{-y}{x^2 + y^2}$$
Similarly we obtain
$$\frac{\partial u}{\partial y} = \frac{x}{x^2 + y^2}, \qquad \frac{\partial^2 u}{\partial y \partial x} = \frac{y^2 - x^2}{(x^2 + y^2)^2}, \qquad \frac{\partial^2 u}{\partial x \partial y} = \frac{y^2 - x^2}{(x^2 + y^2)^2}$$
so ∂²u/∂x∂y = ∂²u/∂y∂x.
3). If z = f(x + ay) + φ(x − ay), prove that
$$\frac{\partial^2 z}{\partial y^2} = a^2 \frac{\partial^2 z}{\partial x^2}$$
Solution: Given z = f(x + ay) + φ(x − ay), we get
z_x = f′(x + ay) + φ′(x − ay)
z_xx = f″(x + ay) + φ″(x − ay)
z_y = a f′(x + ay) − a φ′(x − ay)
z_yy = a² f″(x + ay) + a² φ″(x − ay)
Clearly z_yy = a² z_xx.
4). If x = r cosθ, y = r sinθ, show that ∂θ/∂x = −sinθ/r.
Solution: Given x = r cosθ, y = r sinθ, we have θ = tan⁻¹(y/x), so
$$\frac{\partial \theta}{\partial x} = \frac{1}{1 + (y/x)^2}\cdot\left(-\frac{y}{x^2}\right) = \frac{-y}{x^2 + y^2} = \frac{-r\sin\theta}{r^2} = -\frac{\sin\theta}{r}$$
Hence proved.
5) If u = x² + 2y, v = x + y + z, w = x − 2y + 3z, then find ∂(u, v, w)/∂(x, y, z).
Solution: We have u_x = 2x, u_y = 2, u_z = 0; v_x = 1, v_y = 1, v_z = 1; w_x = 1, w_y = −2, w_z = 3. Then
$$\frac{\partial(u, v, w)}{\partial(x, y, z)} = \begin{vmatrix} 2x & 2 & 0 \\ 1 & 1 & 1 \\ 1 & -2 & 3 \end{vmatrix} = 2x(3 + 2) - 2(3 - 1) + 0 = 10x - 4$$
6) Prove that u = (x + y)/(1 − xy), v = tan⁻¹x + tan⁻¹y are functionally dependent.
Solution: From u = (x + y)/(1 − xy) we have
$$u_x = \frac{1 + y^2}{(1 - xy)^2}, \qquad u_y = \frac{1 + x^2}{(1 - xy)^2}$$
From v = tan⁻¹x + tan⁻¹y we have
$$v_x = \frac{1}{1 + x^2}, \qquad v_y = \frac{1}{1 + y^2}$$
We have
$$J = \begin{vmatrix} u_x & u_y \\ v_x & v_y \end{vmatrix} = \frac{1 + y^2}{(1 - xy)^2(1 + y^2)} - \frac{1 + x^2}{(1 - xy)^2(1 + x^2)} = 0$$
Since J = 0, u and v are functionally dependent (indeed u = tan v).
7). If u = x + y + z, uv = y + z, uvw = z, then prove that ∂(x, y, z)/∂(u, v, w) = u²v.
Solution: From the given relations, x = u − uv = u(1 − v), y = uv − uvw = uv(1 − w), z = uvw. Then
$$\frac{\partial(x, y, z)}{\partial(u, v, w)} = \begin{vmatrix} 1 - v & -u & 0 \\ v(1 - w) & u(1 - w) & -uv \\ vw & uw & uv \end{vmatrix}$$
Adding all three rows to the first replaces it by (1, 0, 0), so the determinant equals
u(1 − w)·uv − (−uv)·uw = u²v(1 − w) + u²vw = u²v.
8). If x = uv, y = u/v, then show that JJ′ = 1.
Solution: Given x = uv, y = u/v, we have x_u = v, x_v = u, y_u = 1/v, y_v = −u/v². Now
$$J = \begin{vmatrix} x_u & x_v \\ y_u & y_v \end{vmatrix} = \begin{vmatrix} v & u \\ 1/v & -u/v^2 \end{vmatrix} = -\frac{u}{v} - \frac{u}{v} = -\frac{2u}{v}$$
But we also have u² = xy and v² = x/y, so u = √(xy), v = √(x/y), and
$$u_x = \frac{y}{2u}, \quad u_y = \frac{x}{2u}, \quad v_x = \frac{1}{2vy}, \quad v_y = -\frac{x}{2vy^2}$$
$$J' = \begin{vmatrix} u_x & u_y \\ v_x & v_y \end{vmatrix} = -\frac{x}{4uvy} - \frac{x}{4uvy} = -\frac{x}{2uvy}$$
Since x = uv and y = u/v, 2uvy = 2u², so J′ = −uv/(2u²) = −v/(2u). Therefore
$$JJ' = \left(-\frac{2u}{v}\right)\left(-\frac{v}{2u}\right) = 1$$
9) Find the stationary values by Lagrange's method of multipliers for f = x + y + z with xyz = a³.
Solution: Given f(x, y, z) = x + y + z and φ(x, y, z) = xyz − a³, form F = f + λφ and set
∂F/∂x = ∂F/∂y = ∂F/∂z = 0:
∂F/∂x = 0 ⇒ 1 + λyz = 0 ⇒ λ = −1/(yz)
Similarly we get λ = −1/(xz) and λ = −1/(xy), so yz = xz = xy, giving x = y = z. With xyz = a³ this yields x = y = z = a, and the stationary value is
f(a, a, a) = a + a + a = 3a.
10) If x = r cosθ, y = r sinθ, find the Jacobians J = ∂(x, y)/∂(r, θ) and J′ = ∂(r, θ)/∂(x, y), and hence show that JJ′ = 1.
Solution: Given x = r cosθ, y = r sinθ. Then x_r = cosθ, y_r = sinθ, x_θ = −r sinθ, y_θ = r cosθ, so
$$J = \begin{vmatrix} x_r & x_\theta \\ y_r & y_\theta \end{vmatrix} = \begin{vmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{vmatrix} = r\cos^2\theta + r\sin^2\theta = r$$
Now r = √(x² + y²), θ = tan⁻¹(y/x), so
$$r_x = \frac{x}{\sqrt{x^2+y^2}}, \quad r_y = \frac{y}{\sqrt{x^2+y^2}}, \quad \theta_x = \frac{-y}{x^2+y^2}, \quad \theta_y = \frac{x}{x^2+y^2}$$
$$J' = \begin{vmatrix} r_x & r_y \\ \theta_x & \theta_y \end{vmatrix} = \frac{x^2}{(x^2+y^2)^{3/2}} + \frac{y^2}{(x^2+y^2)^{3/2}} = \frac{1}{\sqrt{x^2+y^2}} = \frac{1}{r}$$
Hence JJ′ = r · (1/r) = 1.
11) If u = (x + y)/(x − y), v = xy/(x − y)², then find ∂(u, v)/∂(x, y): are u and v functionally related?
Solution: Given u = (x + y)/(x − y), v = xy/(x − y)², we compute
$$u_x = \frac{-2y}{(x-y)^2}, \quad u_y = \frac{2x}{(x-y)^2}, \quad v_x = \frac{-y(x+y)}{(x-y)^3}, \quad v_y = \frac{x(x+y)}{(x-y)^3}$$
$$\frac{\partial(u, v)}{\partial(x, y)} = \begin{vmatrix} u_x & u_y \\ v_x & v_y \end{vmatrix} = \frac{-2xy(x+y) + 2xy(x+y)}{(x-y)^5} = 0$$
Since the Jacobian vanishes, u and v are functionally related.
12) Among the points (6, 0) and (5, 1), which of them is a saddle point for the function f(x, y) = x³ + 3xy² − 15x² − 15y² + 72x?
Solution: Here f_xx = 6x − 30, f_xy = 6y, f_yy = 6x − 30, so
$$H = \begin{bmatrix} 6x - 30 & 6y \\ 6y & 6x - 30 \end{bmatrix}$$
At (6, 0): H = [[6, 0], [0, 6]], so D₁ = 6 > 0 and D₂ = 36 > 0, and (6, 0) is a relative minimum.
At (5, 1): H = [[0, 6], [6, 0]], so D₂ = −36 < 0, and (5, 1) is the saddle point.
1. If xˣ yʸ zᶻ = c, show that at x = y = z, ∂²z/∂x∂y = −[x log(ex)]⁻¹.
Solution: Given xˣ yʸ zᶻ = c. Taking logarithms,
x log x + y log y + z log z = log c ----(1)
Differentiating (1) partially w.r.t. x (z being a function of x and y),
$$(1 + \log x) + (1 + \log z)\frac{\partial z}{\partial x} = 0 \;\Rightarrow\; \frac{\partial z}{\partial x} = -\frac{1 + \log x}{1 + \log z} \;\;\text{----(2)}$$
Similarly, differentiating w.r.t. y,
$$\frac{\partial z}{\partial y} = -\frac{1 + \log y}{1 + \log z} \;\;\text{---------------(3)}$$
Now differentiate equation (3) w.r.t. x:
$$\frac{\partial^2 z}{\partial x \partial y} = \frac{\partial}{\partial x}\!\left[-\frac{1 + \log y}{1 + \log z}\right] = \frac{1 + \log y}{(1 + \log z)^2}\cdot\frac{1}{z}\frac{\partial z}{\partial x}$$
Putting the value of ∂z/∂x from (2),
$$\frac{\partial^2 z}{\partial x \partial y} = -\frac{(1 + \log y)(1 + \log x)}{z(1 + \log z)^3}$$
Now at x = y = z this becomes
$$= -\frac{1}{x(1 + \log x)} = -\frac{1}{x(\log e + \log x)} = -\frac{1}{x\log(ex)} = -[x\log(ex)]^{-1}$$
2. If u = f(e^{y−z}, e^{z−x}, e^{x−y}), then prove that
$$\frac{\partial u}{\partial x} + \frac{\partial u}{\partial y} + \frac{\partial u}{\partial z} = 0$$
Solution: Let X = e^{y−z}, Y = e^{z−x}, Z = e^{x−y}, so that u = f(X, Y, Z) ------------(1)
By the chain rule,
$$\frac{\partial u}{\partial x} = \frac{\partial u}{\partial X}\frac{\partial X}{\partial x} + \frac{\partial u}{\partial Y}\frac{\partial Y}{\partial x} + \frac{\partial u}{\partial Z}\frac{\partial Z}{\partial x} = \frac{\partial u}{\partial X}(0) + \frac{\partial u}{\partial Y}(-e^{z-x}) + \frac{\partial u}{\partial Z}(e^{x-y}) \;\;\text{-----------(2)}$$
Similarly,
$$\frac{\partial u}{\partial y} = \frac{\partial u}{\partial X}(e^{y-z}) + \frac{\partial u}{\partial Z}(-e^{x-y}) \;\;\text{------------(3)}$$
$$\frac{\partial u}{\partial z} = \frac{\partial u}{\partial X}(-e^{y-z}) + \frac{\partial u}{\partial Y}(e^{z-x}) \;\;\text{---------------(4)}$$
Adding equations (2), (3) and (4), every term cancels with its negative, so
$$\frac{\partial u}{\partial x} + \frac{\partial u}{\partial y} + \frac{\partial u}{\partial z} = 0$$
3. If x = r cosθ, y = r sinθ, prove that
$$x\frac{\partial u}{\partial x} + y\frac{\partial u}{\partial y} + \frac{\partial^2 u}{\partial \theta^2} = y^2\frac{\partial^2 u}{\partial x^2} - 2xy\frac{\partial^2 u}{\partial x \partial y} + x^2\frac{\partial^2 u}{\partial y^2}$$
Solution: By the chain rule,
$$\frac{\partial u}{\partial r} = \frac{\partial u}{\partial x}\frac{\partial x}{\partial r} + \frac{\partial u}{\partial y}\frac{\partial y}{\partial r} = \cos\theta\,\frac{\partial u}{\partial x} + \sin\theta\,\frac{\partial u}{\partial y}$$
$$\frac{\partial u}{\partial \theta} = \frac{\partial u}{\partial x}\frac{\partial x}{\partial \theta} + \frac{\partial u}{\partial y}\frac{\partial y}{\partial \theta} = -r\sin\theta\,\frac{\partial u}{\partial x} + r\cos\theta\,\frac{\partial u}{\partial y}$$
Then
$$\frac{\partial^2 u}{\partial \theta^2} = \frac{\partial}{\partial \theta}\!\left[-r\sin\theta\,u_x + r\cos\theta\,u_y\right] = r^2\sin^2\theta\,u_{xx} - 2r^2\sin\theta\cos\theta\,u_{xy} + r^2\cos^2\theta\,u_{yy} - r\cos\theta\,u_x - r\sin\theta\,u_y \;\;\text{-------(2)}$$
Now
$$r\frac{\partial u}{\partial r} = r[\cos\theta\,u_x + \sin\theta\,u_y] = x\,u_x + y\,u_y$$
Adding this to (2), the first-order terms cancel:
$$x\,u_x + y\,u_y + \frac{\partial^2 u}{\partial \theta^2} = r^2\sin^2\theta\,u_{xx} - 2r^2\sin\theta\cos\theta\,u_{xy} + r^2\cos^2\theta\,u_{yy} = y^2 u_{xx} - 2xy\,u_{xy} + x^2 u_{yy}$$
Hence proved.
4. A function f(x, y) is written in terms of the new variables u = eˣ cos y, v = eˣ sin y. Show that
$$\frac{\partial f}{\partial x} = u\frac{\partial f}{\partial u} + v\frac{\partial f}{\partial v}, \qquad \frac{\partial f}{\partial y} = -v\frac{\partial f}{\partial u} + u\frac{\partial f}{\partial v}$$
Solution: By the chain rule,
$$\frac{\partial f}{\partial x} = \frac{\partial f}{\partial u}\frac{\partial u}{\partial x} + \frac{\partial f}{\partial v}\frac{\partial v}{\partial x} = \frac{\partial f}{\partial u}(e^x\cos y) + \frac{\partial f}{\partial v}(e^x\sin y) = u\frac{\partial f}{\partial u} + v\frac{\partial f}{\partial v} \;\;\text{----------(1)}$$
Similarly,
$$\frac{\partial f}{\partial y} = \frac{\partial f}{\partial u}\frac{\partial u}{\partial y} + \frac{\partial f}{\partial v}\frac{\partial v}{\partial y} = \frac{\partial f}{\partial u}(-e^x\sin y) + \frac{\partial f}{\partial v}(e^x\cos y) = -v\frac{\partial f}{\partial u} + u\frac{\partial f}{\partial v} \;\;\text{------------(2)}$$
5. Show that the dependent variables in the following transformation are functionally dependent, and also establish the relation:
u = x eʸ sin z, v = x eʸ cos z, w = x² e^{2y}
Solution:
$$\frac{\partial(u, v, w)}{\partial(x, y, z)} = \begin{vmatrix} e^y\sin z & x e^y\sin z & x e^y\cos z \\ e^y\cos z & x e^y\cos z & -x e^y\sin z \\ 2x e^{2y} & 2x^2 e^{2y} & 0 \end{vmatrix} = e^{4y}\begin{vmatrix} \sin z & x\sin z & x\cos z \\ \cos z & x\cos z & -x\sin z \\ 2x & 2x^2 & 0 \end{vmatrix} = 0$$
since the second column is x times the first. Hence u, v, w are functionally related. The relation between them can be found as follows:
u² + v² = x² e^{2y}(sin²z + cos²z) = x² e^{2y} = w.
6. Divide 24 into 3 parts such that the continued product of the first, the square of the second and the cube of the third may be maximum.
Solution: Let the parts be x, y, z with x + y + z = 24, and maximize f = x y² z³. With φ = x + y + z − 24, form F = f + λφ:
F_x = 0 ⇒ y²z³ + λ = 0 ⇒ λ = −y²z³ ------------(2)
F_y = 0 ⇒ 2xyz³ + λ = 0; F_z = 0 ⇒ 3xy²z² + λ = 0.
Hence y²z³ = 2xyz³ = 3xy²z², giving y = 2x and z = 3x. Then x + 2x + 3x = 24 ⇒ x = 4, y = 8, z = 12, and the maximum product is 4 · 8² · 12³ = 442368.
7. The temperature T at any point (x, y, z) in space is T = 400xyz². Find the highest temperature on the surface of the unit sphere x² + y² + z² = 1.
Solution: With φ = x² + y² + z² − 1, form F = T + λφ:
$$F_x = 0 \Rightarrow 400yz^2 + 2\lambda x = 0 \Rightarrow \lambda = -\frac{200yz^2}{x} \;\;\text{-------------(2)}$$
$$F_y = 0 \Rightarrow 400xz^2 + 2\lambda y = 0 \Rightarrow \lambda = -\frac{200xz^2}{y} \;\;\text{--------------(3)}$$
$$F_z = 0 \Rightarrow 800xyz + 2\lambda z = 0 \Rightarrow \lambda = -400xy \;\;\text{--------------(4)}$$
From (2) and (3), y² = x²; from (2) and (4), z² = 2x². Substituting in x² + y² + z² = 1:
x² + x² + 2x² = 1, i.e. 4x² = 1, so
x = 1/2, y = 1/2 and z = 1/√2. The required maximum is T = 400(1/2)(1/2)(1/√2)² ≡ 50.
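The constrained maximum can be sanity-checked numerically by sampling the sphere with spherical angles. A small sketch assuming NumPy is available; the grid maximum approaches the analytic value 50 as the grid is refined:

```python
import numpy as np

# T = 400*x*y*z^2 on the unit sphere, parameterised by spherical angles
theta = np.linspace(0, np.pi, 600)
phi = np.linspace(0, 2 * np.pi, 600)
TH, PH = np.meshgrid(theta, phi)
x = np.sin(TH) * np.cos(PH)
y = np.sin(TH) * np.sin(PH)
z = np.cos(TH)
T = 400 * x * y * z ** 2
T_max = T.max()   # close to the analytic maximum 50 at (1/2, 1/2, 1/sqrt(2))
```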
Find the extreme values of f(x, y) = 2 + 2x + 2y − x² − y².
Solution: Given f(x, y) = 2 + 2x + 2y − x² − y², we have
f_x = 2 − 2x, f_y = 2 − 2y, f_xx = −2, f_xy = 0 and f_yy = −2.
The only critical point is (1, 1); there D₁ = −2 < 0 and D₂ = 4 > 0, so f has a relative maximum at (1, 1), with maximum value f(1, 1) = 4.
(b) At (0, 2): D₁(0, 2) = 6 > 0 and D₂(0, 2) = 36 > 0, so f has a relative minimum at (0, 2). The minimum value is f(0, 2) = −3.
(c) At (±1, 1): H = [[0, ±6], [±6, 0]], so D₂(±1, 1) = −36 < 0, and f has saddle points at (±1, 1).
Examine f(x, y) = x³ + y³ − 3axy for extreme values. ---------------(1)
f_x = 0 ⇒ 3x² − 3ay = 0 ⇒ y = x²/a ---------(2)
f_y = 0 ⇒ 3y² − 3ax = 0 ⇒ y² = ax -----------(3)
Solving (2) and (3): x⁴/a² = ax ⇒ x(x³ − a³) = 0, so the critical points are (0, 0) and (a, a).
With r = f_xx = 6x, s = f_xy = −3a, t = f_yy = 6y:
At (a, a): rt − s² = 36a² − 9a² = 27a² > 0, i.e. D₂ > 0, and r = 6a, so for a > 0 f has a relative minimum at (a, a), with f(a, a) = −a³.
At (0, 0): rt − s² = −9a² < 0, so (0, 0) is a saddle point.
Find the extreme values of f(A, B) = cos A cos B cos(A + B).
∂f/∂A = −cos B sin(2A + B)
r = ∂²f/∂A² = −2 cos B cos(2A + B)
s = ∂²f/∂A∂B = −cos(2A + 2B)
t = ∂²f/∂B² = −2 cos A cos(A + 2B)
For maxima and minima,
∂f/∂A = 0 and ∂f/∂B = 0:
cos B sin(2A + B) = 0 ---------------(2)
cos A sin(A + 2B) = 0 ---------------(3)
Solving (2) and (3) for an interior stationary point gives 2A + B = π and A + 2B = π, i.e. A = B = π/3. There r = 1, s = 1/2, t = 1, so rt − s² = 3/4 > 0 with r > 0, and f has a relative minimum at (π/3, π/3), with f(π/3, π/3) = −1/8.
12. Find and classify the critical points of the function f(x, y, z) = x² + y² + 7z² − xy − 3yz.
Solution:
We have f_x = 2x − y = 0, f_y = 2y − x − 3z = 0 and f_z = 14z − 3y = 0. The only solution is x = y = z = 0, so (0, 0, 0) is the only critical point. The Hessian
$$H = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -3 \\ 0 & -3 & 14 \end{bmatrix}$$
has D₁ = 2 > 0, D₂ = 3 > 0 and D₃ = 24 > 0, so H is positive definite and f has a relative minimum at (0, 0, 0).
The minimum value of the function is f(0, 0, 0) = 0.
13. Find the dimensions of a rectangular box of maximum capacity whose surface area is given, when the box is closed.
Solution: Let x, y, z be the length, breadth and height of the rectangular box respectively.
V = xyz, so V_x = yz, V_y = xz, V_z = xy.
Surface area: S = 2(xy + yz + zx).
By Lagrange's method,
∂V/∂x + λ ∂S/∂x = 0 ⇒ yz + 2λ(y + z) = 0 ------------(1)
∂V/∂y + λ ∂S/∂y = 0 ⇒ xz + 2λ(x + z) = 0 ------------(2)
∂V/∂z + λ ∂S/∂z = 0 ⇒ xy + 2λ(x + y) = 0 -------------(3)
Solving equations (1), (2) and (3), we get x = y = z. Thus length = breadth = height: the box of maximum capacity is a cube.
14. Find the extreme values of f(x, y, z) = 2x + 3y + z subject to x² + y² = 5 and x + z = 1.
Solution:
f(x, y, z) = 2x + 3y + z
Given φ(x, y, z) = x² + y² − 5 and ψ(x, y, z) = x + z − 1.
By Lagrange's method, with F = f + λφ + μψ,
∂F/∂x = 2 + 2λx + μ = 0 -----------(1)
∂F/∂y = 3 + 2λy = 0 ------------(2)
∂F/∂z = 1 + μ = 0 -------------(3)
From (3), μ = −1; then (1) gives 2λx = −1 and (2) gives 2λy = −3, so y = 3x. Substituting into x² + y² = 5: 10x² = 5, so
x = ±1/√2, λ = ∓1/√2, y = ±3/√2, z = 1 ∓ 1/√2.
The extreme points are (1/√2, 3/√2, 1 − 1/√2) and (−1/√2, −3/√2, 1 + 1/√2).
15. Find the area of the greatest rectangle that can be inscribed in the ellipse
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$
Solution: Let ABCD be the rectangle, with the vertex A(x, y) in the first quadrant and sides parallel to the axes, so that AB = 2x, BC = 2y and the area is A = 4xy. Here
$$\frac{\partial A}{\partial x} = 4y, \quad \frac{\partial A}{\partial y} = 4x, \quad \varphi = \frac{x^2}{a^2} + \frac{y^2}{b^2} - 1, \quad \varphi_x = \frac{2x}{a^2}, \quad \varphi_y = \frac{2y}{b^2}$$
By Lagrange's method,
$$\frac{\partial A}{\partial x} + \lambda\varphi_x = 0 \Rightarrow 4y + \frac{2\lambda x}{a^2} = 0 \;\;\text{-----------(1)}$$
$$\frac{\partial A}{\partial y} + \lambda\varphi_y = 0 \Rightarrow 4x + \frac{2\lambda y}{b^2} = 0 \;\;\text{-------------(2)}$$
Solving equations (1) and (2) with the constraint, we get
x = a/√2 and y = b/√2.
Required area = 4 · (a/√2) · (b/√2) = 2ab.
Hence the area of the greatest rectangle inscribed in the ellipse is 2ab.
16. A rectangular box open at the top is to be designed to have a fixed capacity of 4000 cft. Determine its dimensions such that its surface area is a minimum, using Lagrange's multipliers method.
Solution: Choose the dimensions of the box as x, y and z, so that its volume and surface area are respectively xyz and S = xy + 2yz + 2zx. With F = S + λ(xyz − 4000), the conditions F_x = F_y = F_z = 0 give x = y = 2z. Substituting in the equation xyz = 4000: (2z)(2z)z = 4000 ⇒ z³ = 1000 ⇒ z = 10, and we get the critical point (20, 20, 10).
Exercise
1. If u = (x² + y² + z²)^{−1/2}, prove that u_xx + u_yy + u_zz = 0.
2. If x = eʳ cosθ, y = eʳ sinθ, then show that u_xx + u_yy = e^{−2r}[u_rr + u_θθ].
3. If u = sin⁻¹(x/y) + tan⁻¹(y/x), then prove that x ∂u/∂x + y ∂u/∂y = 0.
4. If x = vw, y = uw, z = uv and u = r sinθ cosφ, v = r sinθ sinφ, w = r cosθ, find ∂(x, y, z)/∂(r, θ, φ). [Ans: 2uvw · r² sinθ]
5. Find J, J′ for x = eᵛ sec u, y = eᵛ tan u and hence show that JJ′ = 1. [Ans: J = −x eᵛ, J′ = −1/(x eᵛ)]
6. Show that the functions u = x + y + z, v = x² + y² + z² − 2xy − 2yz − 2zx, w = x³ + y³ + z³ − 3xyz are functionally related and hence find the relation between them. [Ans: 4w = u³ + 3uv]
7. Find the maximum and minimum distances from the origin to the curve 5x² − 6xy + 5y² = 8. [Ans: 2, 1]
8. Locate the stationary points of x⁴ + y⁴ − 2x² + 4xy − 2y² and examine their nature.
9. Find the extrema of the following functions:
(a) 3y² − 2y³ − 3x² + 6xy
(b) xy + 1/x + 1/y
(c) x⁴ + y⁴ + z⁴ − 4xyz
10. Find the absolute maximum and minimum values for the following functions in the closed region R:
(Max val. = 7, Min val. = −1/27)
(Max = 3, Min val. = −3/2)
11. A rectangular box open at the top has constant surface area 108 sq. ft. Find its dimensions such that its volume is maximum. [Ans: 108 cft at (6, 6, 3)]
12. The sum of three positive integers is 12. Find the maximum of the product of the first, the square of the second and the cube of the third. [Ans: Max = 6912 at (2, 4, 6)]
13. Find the volume of the largest rectangular parallelepiped that can be inscribed in the ellipsoid of revolution 4x² + 4y² + 9z² = 36. [Ans: Max vol. = 16√3]
14. The temperature at a point (x, y) on a metal plate is T(x, y) = 4x² − 4xy + y². An ant on the plate walks around the circle of radius 5 centered at the origin. What are the highest and lowest temperatures encountered by the ant?
15. Find the minimum value of the function f(x, y, z) = x² + y² + z² subject to the constraints x + 2y + 3z = 6 and x + 3y + 9z = 9.
UNIT – V
FUNCTION APPROXIMATION TOOLS IN ENGINEERING
Definitions:
Continuity at a point: A function f(x) is said to be continuous at x = a if lim_{x→a⁺} f(x) = lim_{x→a⁻} f(x) = f(a).
Continuity in an interval: A function f(x) is said to be continuous in the interval [a, b] if f(x) is continuous at every point c ∈ (a, b), i.e. lim_{x→c} f(x) = f(c), and lim_{x→a⁺} f(x) = f(a) and lim_{x→b⁻} f(x) = f(b).
Geometrically, if f(x) is continuous in [a, b], the graph of y = f(x) is a continuous curve for the points x in [a, b].
Derivability at a point: A function f(x) is derivable at x = a if
$$\lim_{x \to a^+} \frac{f(x) - f(a)}{x - a} = \lim_{x \to a^-} \frac{f(x) - f(a)}{x - a}$$
exists; the common value is denoted by f′(a).
Derivability in an interval: A function f(x) is said to be derivable in the interval [a, b] if f(x) is derivable at every point c ∈ (a, b), i.e. lim_{x→c} [f(x) − f(c)]/(x − c) exists, and the one-sided limits lim_{x→a⁺} [f(x) − f(a)]/(x − a) and lim_{x→b⁻} [f(x) − f(b)]/(x − b) exist.
Geometrically, if f(x) is derivable in [a, b] then there exists a unique tangent to the curve at every point in the interval.
Note: 1. If f′(x) > 0 then f(x) is an increasing function as x increases.
2. If f′(x) < 0 then f(x) is a decreasing function as x increases.
3. eˣ, sin x, cos x are continuous and derivable everywhere.
4. log x is continuous and derivable on (0, ∞).
5. Every polynomial function is continuous and derivable everywhere.
6. If f(x) and g(x) are continuous functions, then f(x) + g(x), f(x) − g(x) and f(x)·g(x) are also continuous, and f(x)/g(x) is continuous wherever g(x) ≠ 0.
Another form of Taylor's Theorem:
If f : [a, a+h] → R is such that (i) f^{(n−1)} is continuous in [a, a+h], (ii) f^{(n−1)} is derivable in (a, a+h), and p ∈ Z⁺, then there exists a real number θ ∈ (0, 1) such that
$$f(a+h) = f(a) + \frac{h}{1!}f'(a) + \frac{h^2}{2!}f''(a) + \cdots + \frac{h^{n-1}}{(n-1)!}f^{(n-1)}(a) + R_n \;\;\text{....(1)}$$
where
$$R_n = \frac{h^n}{n!}f^{(n)}(a + \theta h)$$
is called Lagrange's form of the remainder after n terms.
Maclaurin's Series: A Taylor's series expansion of f(x) about the point x = 0 is called the Maclaurin's series of f(x), i.e.
$$f(x) = f(0) + \frac{x}{1!}f'(0) + \frac{x^2}{2!}f''(0) + \cdots + \frac{x^{n-1}}{(n-1)!}f^{(n-1)}(0) + \cdots$$
1) Obtain the Maclaurin's series expansion of f(x) = log(1 + x).
Sol: f(x) = log(1 + x) ⇒ f(0) = 0
f′(x) = 1/(1 + x) ⇒ f′(0) = 1
f″(x) = −1/(1 + x)² ⇒ f″(0) = −1
f‴(x) = 2/(1 + x)³ ⇒ f‴(0) = 2
f⁗(x) = −6/(1 + x)⁴ ⇒ f⁗(0) = −6, etc.
Substituting in the Maclaurin's series,
$$\log(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots$$
2) Write the Taylor's series for f(x) = (1 − x)^{5/2} with Lagrange's form of remainder up to 3 terms in the interval [0, 1].
Sol: Given f(x) = (1 − x)^{5/2}.
It is clear that f(x), f′(x), f″(x) are continuous and f″(x) is derivable in (0, 1). Thus f(x) satisfies the conditions of Taylor's theorem.
∴ The Taylor's series for f(x) in [0, x] with remainder after 3 terms is
$$f(x) = f(0) + \frac{x}{1!}f'(0) + \frac{x^2}{2!}f''(0) + \frac{x^3}{3!}f'''(c), \quad c \in (0, x) \;\;\text{...... (1)}$$
Now f(x) = (1 − x)^{5/2} ⇒ f(0) = 1
f′(x) = −(5/2)(1 − x)^{3/2} ⇒ f′(0) = −5/2
f″(x) = (15/4)(1 − x)^{1/2} ⇒ f″(0) = 15/4
f‴(x) = −(15/8)(1 − x)^{−1/2} ⇒ f‴(c) = −(15/8)(1 − c)^{−1/2}
∴ from (1),
$$f(x) = 1 - \frac{5x}{2} + \frac{15x^2}{8} - \frac{15\,x^3}{48}(1 - c)^{-1/2}$$
2. Obtain the Maclaurin's series expansion of f(x) = sin(m sin⁻¹x), where m is a constant.
Sol: Let y = sin(m sin⁻¹x). Then
$$y_1 = \cos(m\sin^{-1}x)\cdot\frac{m}{\sqrt{1 - x^2}} \;\Rightarrow\; \sqrt{1 - x^2}\,y_1 = m\cos(m\sin^{-1}x)$$
Squaring,
(1 − x²)y₁² = m² cos²(m sin⁻¹x) = m²[1 − sin²(m sin⁻¹x)] = m²[1 − y²]
Differentiating w.r.t. x, we get
2(1 − x²)y₁y₂ − 2x y₁² = −2m² y y₁
⇒ (1 − x²)y₂ − x y₁ + m² y = 0, y₁ ≠ 0 ............ (3)
Differentiating (3) w.r.t. x for n times using Leibnitz's rule, we get
$$(1 - x^2)y_{n+2} + n(-2x)y_{n+1} + \frac{n(n-1)}{2!}(-2)y_n - (x y_{n+1} + n y_n) + m^2 y_n = 0$$
i.e. (1 − x²)y_{n+2} − (2n + 1)x y_{n+1} + (m² − n²)y_n = 0.
3. Obtain the 4th degree Taylor polynomial approximation to f(x) = e^{2x} about x = 0. Find the maximum error when 0 ≤ x ≤ 0.5.
Sol: Since f^{(k)}(x) = 2ᵏ e^{2x}, f^{(k)}(0) = 2ᵏ, and
$$e^{2x} \approx 1 + 2x + 2x^2 + \frac{4x^3}{3} + \frac{2x^4}{3}$$
The remainder after 5 terms is R₅(x) = (x⁵/5!) f⁽⁵⁾(c) = (32/120) x⁵ e^{2c} with 0 < c < x, so
$$|R_5(x)| \le \frac{32}{120}\Big[\max_{0 \le x \le 0.5} x^5\Big]\Big[\max_{0 \le c \le 0.5} e^{2c}\Big] = \frac{32}{120}\cdot\frac{1}{32}\cdot e = \frac{e}{120}$$
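The Lagrange bound above can be compared with the actual error by sampling the interval. A short sketch in plain Python; the observed maximum error (about 0.00995, at the endpoint x = 0.5) sits comfortably below the bound e/120 ≈ 0.02265:

```python
import math

def p4(x):
    """4th degree Taylor polynomial of e^(2x) about x = 0."""
    return 1 + 2 * x + 2 * x ** 2 + (4 / 3) * x ** 3 + (2 / 3) * x ** 4

# actual error on [0, 0.5] versus the Lagrange remainder bound e/120
xs = [0.5 * i / 1000 for i in range(1001)]
max_err = max(abs(math.exp(2 * x) - p4(x)) for x in xs)
bound = math.e / 120
```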
4. Prove that if x > 0,
$$x - \frac{x^2}{2} < \log(1 + x) < x - \frac{x^2}{2(1 + x)}$$
Sol: Let f(x) = x − x²/2 − log(1 + x)
$$\Rightarrow f'(x) = 1 - x - \frac{1}{1 + x} = \frac{(1 - x)(1 + x) - 1}{1 + x} = \frac{-x^2}{1 + x}$$
Clearly f′(x) < 0 for x > 0, so f is a decreasing function; since f(0) = 0, f(x) < 0 for x > 0,
⇒ x − x²/2 < log(1 + x) ............. (1)
Again let g(x) = log(1 + x) − x + x²/(2(1 + x))
$$\Longrightarrow g'(x) = \frac{-x^2}{2(1 + x)^2}$$
Clearly g′(x) < 0 for x > 0, so g(x) is a decreasing function for x > 0; since g(0) = 0, g(x) < 0,
⇒ log(1 + x) − x + x²/(2(1 + x)) < 0
⇒ log(1 + x) < x − x²/(2(1 + x)) ..................... (2)
From (1) and (2) the required inequality follows.
5. Show that √x = 1 + (1/2)(x − 1) − (1/8)(x − 1)² + ............ for 0 < x < 2.
Sol: We have to express √x in powers of (x − 1), i.e. as a Taylor's series of f(x) about x = 1:
$$f(x) = f(1) + \frac{x - 1}{1!}f'(1) + \frac{(x - 1)^2}{2!}f''(1) + \frac{(x - 1)^3}{3!}f'''(1) + \frac{(x - 1)^4}{4!}f^{iv}(1) + \cdots \;\;\text{(1)}$$
Now f(x) = √x = x^{1/2} ⇒ f(1) = 1
f′(x) = (1/2)x^{−1/2} ⇒ f′(1) = 1/2
f″(x) = −(1/4)x^{−3/2} ⇒ f″(1) = −1/4
f‴(x) = (3/8)x^{−5/2} ⇒ f‴(1) = 3/8
f⁗(x) = −(15/16)x^{−7/2} ⇒ f⁗(1) = −15/16, etc.
Substituting in (1),
$$\sqrt{x} = 1 + \frac{1}{2}(x - 1) - \frac{1}{8}(x - 1)^2 + \frac{1}{16}(x - 1)^3 - \frac{5}{128}(x - 1)^4 + \cdots$$
which converges for 0 < x < 2.
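The partial sums of this binomial series can be evaluated directly and compared with the true square root. A sketch in plain Python (`sqrt_series` is an illustrative name); the coefficients C(1/2, k) are built recursively rather than stated in closed form:

```python
import math

def sqrt_series(x, terms=40):
    """Partial sum of the Taylor series of sqrt(x) about x = 1,
    i.e. the binomial series (1 + h)^(1/2) with h = x - 1; valid for 0 < x < 2."""
    h = x - 1
    coeff, total, power = 1.0, 0.0, 1.0
    for k in range(terms):
        total += coeff * power
        coeff *= (0.5 - k) / (k + 1)   # next binomial coefficient C(1/2, k+1)
        power *= h
    return total

approx = sqrt_series(1.5)   # agrees with math.sqrt(1.5) to many decimals
```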
DIFFERENTIATION AND INTEGRATION
1. d/dx(e^{ax}) = a·e^{ax};  ∫ e^{ax} dx = e^{ax}/a + c
2. d/dx(xⁿ) = n x^{n−1};  ∫ xⁿ dx = x^{n+1}/(n+1) + c
3. d/dx(aˣ) = aˣ log a;  ∫ aˣ dx = aˣ/log a + c
4. d/dx(log x) = 1/x;  ∫ (1/x) dx = log x + c
5. d/dx(sin x) = cos x;  ∫ sin x dx = −cos x + c
6. d/dx(cos x) = −sin x;  ∫ cos x dx = sin x + c
7. d/dx(tan x) = sec²x;  ∫ tan x dx = log|sec x| + c
8. d/dx(cot x) = −cosec²x;  ∫ cot x dx = log|sin x| + c
9. d/dx(sec x) = sec x tan x;  ∫ sec x dx = log|sec x + tan x| + c (or) log|tan(π/4 + x/2)| + c
10. d/dx(cosec x) = −cosec x cot x;  ∫ cosec x dx = log|cosec x − cot x| + c (or) log|tan(x/2)| + c
11. d/dx(sinh x) = cosh x;  ∫ sinh x dx = cosh x + c
12. d/dx(cosh x) = sinh x;  ∫ cosh x dx = sinh x + c
13. d/dx(tanh x) = sech²x;  ∫ tanh x dx = log|cosh x| + c
14. d/dx(coth x) = −cosech²x;  ∫ coth x dx = log|sinh x| + c
15. d/dx(sech x) = −sech x tanh x;  ∫ sech x dx = tan⁻¹(sinh x) + c
16. d/dx(cosech x) = −cosech x coth x;  ∫ cosech x dx = ln|tanh(x/2)| + c
17. d/dx(sin⁻¹x) = 1/√(1 − x²);  ∫ dx/√(a² − x²) = sin⁻¹(x/a) + c (or) −cos⁻¹(x/a) + c
18. d/dx(cos⁻¹x) = −1/√(1 − x²);  ∫ dx/(a² + x²) = (1/a) tan⁻¹(x/a) + c (or) −(1/a) cot⁻¹(x/a) + c
19. d/dx(cot⁻¹x) = −1/(1 + x²);  ∫ dx/√(x² − a²) = cosh⁻¹(x/a) + c (or) log(x + √(x² − a²)) + c
20. d/dx(tan⁻¹x) = 1/(1 + x²);  ∫ dx/√(x² + a²) = sinh⁻¹(x/a) + c (or) log(x + √(x² + a²)) + c
21. d/dx(sec⁻¹x) = 1/(|x|√(x² − 1));  ∫ dx/(x² − a²) = (1/2a) log|(x − a)/(x + a)| + c (or) −(1/a) coth⁻¹(x/a) + c
22. d/dx(cosec⁻¹x) = −1/(|x|√(x² − 1));  ∫ dx/(a² − x²) = (1/2a) log|(a + x)/(a − x)| + c (or) (1/a) tanh⁻¹(x/a) + c
23. d/dx(uv) = uv′ + vu′;  ∫ uv dx = u∫v dx − ∫(u′ ∫v dx) dx
24. d/dx(u/v) = (u′v − uv′)/v²
∫₀²ᵃ f(x) dx = 2∫₀ᵃ f(x) dx if f(2a − x) = f(x); = 0 if f(2a − x) = −f(x).
Trigonometric identities:
cosec²x − cot²x = 1
cos 2x = cos²x − sin²x = 2cos²x − 1 = 1 − 2sin²x
sin 2x = 2 sin x cos x
sin 3x = 3 sin x − 4 sin³x
cos 3x = 4 cos³x − 3 cos x
tan 2x = 2 tan x / (1 − tan²x)
tan(x ± y) = (tan x ± tan y) / (1 ∓ tan x tan y)
Standard series:
sin x = x − x³/3! + x⁵/5! − x⁷/7! + .....
cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + .....
log(1 + x) = x − x²/2 + x³/3 − x⁴/4 + .....
log(1 − x) = −(x + x²/2 + x³/3 + .....)
(1 + x)⁻¹ = 1 − x + x² − x³ + x⁴ − .....
(1 − x)⁻¹ = 1 + x + x² + x³ + x⁴ + .....
(1 + x)⁻² = 1 − 2x + 3x² − 4x³ + .....
(1 − x)⁻² = 1 + 2x + 3x² + 4x³ + .....
Properties of definite integrals:
∫₋ₐᵃ f(x) dx = 0 if f is odd; = 2∫₀ᵃ f(x) dx if f is even.
∫₀²ᵃ f(x) dx = 2∫₀ᵃ f(x) dx if f(2a − x) = f(x); = 0 if f(2a − x) = −f(x).