
Linear Algebra and Function Approximation

The document outlines a syllabus for a B.Tech course on Linear Algebra and Differential Calculus, covering topics such as vector and matrix algebra, eigenvalue problems, matrix decomposition, multivariable calculus, and function approximation. It includes detailed units on fundamental concepts, methods for solving systems of equations, and optimization techniques. Additionally, it lists recommended textbooks for further study in advanced engineering mathematics.

A COMPILATION OF PROBLEMS IN

LINEAR ALGEBRA AND


FUNCTION APPROXIMATION

EDITED BY
Faculty Of Mathematics

DEPARTMENT OF MATHEMATICS

SYLLABUS
B.Tech. I Year I Sem.  L T P C: 3 1 0 4
LINEAR ALGEBRA AND DIFFERENTIAL CALCULUS
COMMON TO CE, EEE, ME, ECE, CSE & IT

UNIT-1: Fundamentals of Vector and Matrix algebra


Operations on vectors and matrices- Orthogonal projection of vectors- Exact and generalized inverse of a
matrix- Rank of a matrix- Linear independence of vectors- Structured square matrices (Symmetric,
Hermitian, skew-symmetric, skew-Hermitian, orthogonal and unitary matrices)- Vector and matrix norms

Solution of a linear algebraic system of equations (homogeneous and non-homogeneous) using Gauss
elimination

UNIT-II: Matrix eigenvalue problem and Quadratic forms


Determination of eigenvalues and eigenvectors of a matrix, properties of eigenvalues and eigenvectors
(without proof)- Similarity of matrices- Diagonalization of a matrix- Orthogonal diagonalization of a
symmetric matrix- Definiteness of a symmetric matrix

Quadratic Forms- Definiteness and nature of a quadratic form- Reduction of a quadratic form to the
canonical form using an orthogonal transformation

UNIT-III: Matrix decomposition and Least squares solution of algebraic systems


LU decomposition- Cholesky decomposition- Gram-Schmidt orthonormalization process- QR
factorization- Eigen decomposition of a symmetric matrix- Singular value decomposition

Least squares solution of an overdetermined system of equations using QR factorization and the
generalized inverse- Estimation of the least squares error

UNIT-IV: Multivariable differential calculus and Function optimization

Partial Differentiation- Chain rule- Total differentiation- Jacobian- Functional dependence

Multivariable function Optimization- Taylor’s theorem for multivariable functions- Unconstrained


optimization of functions using the Hessian matrix- Constrained optimization using the Lagrange multiplier
method

UNIT-V: Function approximation tools in engineering

Function approximation using Taylor’s polynomials- Properties of Chebyshev polynomials- Uniform


approximation using Chebyshev polynomials

The principle of least squares- Function approximation using polynomial, exponential and power curves
using matrix notation- Estimating the Mean squared error

TEXT BOOKS:

1. Advanced Engineering Mathematics, 5th edition, R.K. Jain and S.R.K. Iyengar, Narosa Publishing House.
2. Higher Engineering Mathematics, B.S. Grewal, Khanna Publishers.

UNIT - 1
Fundamentals of Vector and Matrix Algebra
Introduction: Let V be a non-empty set of certain objects, which may be vectors, matrices, functions or
some other objects. Each object is an element of V and is called a vector. The elements of V are denoted
by a, b, c, u, v, etc.
Example: a = (a1, a2, …, an) and b = (b1, b2, …, bn), where a, b ∈ V.
Assume that the two algebraic operations
1. Vector addition
a + b = (a1, a2, …, an) + (b1, b2, …, bn) = (a1 + b1, a2 + b2, …, an + bn)
2. Scalar multiplication
αa = α(a1, a2, …, an) = (αa1, αa2, …, αan) for any scalar α
are defined on the elements of V.
Vector Space: A set V is called a vector space if for any elements a, b, c in V and any scalars α, β the
following properties are satisfied.
Properties with respect to vector addition:
i. a + b is in V
ii. a + b = b + a (Commutative law)
iii. (a + b) + c = a + (b + c) (Associative law)
iv. a + 0 = 0 + a = a (Existence of a unique zero element in V)
v. a + (−a) = 0 (Existence of additive inverse or negative vector in V)
Properties with respect to scalar multiplication:
vi. αa is in V
vii. (α + β)a = αa + βa (Left distributive law)
viii. (αβ)a = α(βa)
ix. α(a + b) = αa + αb (Right distributive law)
x. 1·a = a (Existence of multiplicative identity)
If the elements of V and the scalars α, β are real numbers, then V is called a real vector space. If the
elements of V are complex, or the scalars α, β are allowed to be complex, then V is called a complex
vector space.
Examples of a vector space:
1. The set V of real or complex numbers.
2. The set of real valued continuous functions f on any closed interval [𝑎, 𝑏]. The 0 vector defined in
property (iv) is the zero function.
3. The set of polynomials Pn of degree less than or equal to n.
4. The set V of n-tuples in Rⁿ or Cⁿ.
5. The set V of all m × n matrices. The element 0 defined in property (iv) is the null matrix of order m × n.

Example 1: Let V be the set of all ordered pairs (x, y), where x, y are real numbers.
Let a̅ = (x1, y1) and b̅ = (x2, y2) be elements in V. Define the addition as
a̅ + b̅ = (x1, y1) + (x2, y2) = (x1x2, y1y2)
and the usual scalar multiplication α(x1, y1) = (αx1, αy1). Show that V is not a vector space.
Solution: Note that (1,1) is an element of V. From the given definition of vector addition, we find that
(x1, y1) + (1,1) = (x1, y1). Therefore the element (1,1) plays the role of the '0' element as defined in
property (iv).
Now there exists an element (1/x1, 1/y1) such that (x1, y1) + (1/x1, 1/y1) = (1,1).
The element (1/x1, 1/y1) plays the role of the additive inverse.
Now let α = 1, β = 2 be any two scalars. We have (α + β)(x1, y1) = 3(x1, y1) = (3x1, 3y1)
and α(x1, y1) + β(x1, y1) = 1(x1, y1) + 2(x1, y1) = (x1, y1) + (2x1, 2y1) = (2x1², 2y1²).
Therefore (α + β)(x1, y1) ≠ α(x1, y1) + β(x1, y1), so property (vii) is not satisfied (property (ix) also
fails). Hence V is not a vector space.

Linear independence of vectors: Let V be a vector space. A finite set {V̅1, V̅2, V̅3, …, V̅n} of elements
of V is said to be linearly dependent if there exist scalars α1, α2, α3, …, αn, not all zero, such that
α1V̅1 + α2V̅2 + α3V̅3 + ⋯ + αnV̅n = 0.
If the above equation is satisfied only for α1 = α2 = α3 = ⋯ = αn = 0, then the set of vectors is said to
be linearly independent.
Note: 1. The set of vectors {V̅1, V̅2, V̅3, …, V̅n} is linearly dependent if and only if at least one element
of the set is a linear combination of the remaining elements.
2. The equation α1V̅1 + α2V̅2 + α3V̅3 + ⋯ + αnV̅n = 0 gives a homogeneous system of algebraic
equations. If det(coefficient matrix) = 0, then the vectors V̅1, V̅2, V̅3, …, V̅n are linearly dependent.
Otherwise, if det(coefficient matrix) ≠ 0, the only solution is α1 = α2 = α3 = ⋯ = αn = 0 and the
vectors are linearly independent.
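The determinant test in note 2 can be checked numerically. A minimal sketch in Python with NumPy (illustrative only; NumPy is not part of the course text):

```python
import numpy as np

# Columns of V are the vectors being tested for linear independence.
V = np.array([[1, 0, 0],
              [-1, 1, 0],
              [0, -1, 1]])

# A nonzero determinant of the coefficient matrix means the only
# solution of a1*V1 + a2*V2 + a3*V3 = 0 is a1 = a2 = a3 = 0.
det = np.linalg.det(V)
independent = not np.isclose(det, 0.0)
```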
Example 2: Let V̅1 = (1, −1, 0)ᵀ, V̅2 = (0, 1, −1)ᵀ and V̅3 = (0, 0, 1)ᵀ be elements of ℝ³. Show that the
set of vectors {V̅1, V̅2, V̅3} is linearly independent.

Solution: We consider the vector equation α1V̅1 + α2V̅2 + α3V̅3 = 0̅.
Substituting V̅1, V̅2, V̅3, we obtain α1(1, −1, 0)ᵀ + α2(0, 1, −1)ᵀ + α3(0, 0, 1)ᵀ = 0̅
⟹ α1 = 0, −α1 + α2 = 0, −α2 + α3 = 0.
The solution of these equations is α1 = α2 = α3 = 0.
Therefore the given set of vectors is linearly independent.
OR
det(V̅1, V̅2, V̅3) = det[1 0 0; −1 1 0; 0 −1 1] = 1 ≠ 0 (columns V̅1, V̅2, V̅3).
Therefore the given set of vectors is linearly independent.
Example 3: Let V̅1 = (1, −1, 0)ᵀ, V̅2 = (0, 1, −1)ᵀ, V̅3 = (0, 2, 1)ᵀ and V̅4 = (1, 0, 3)ᵀ be elements of ℝ³.
Show that the set of vectors {V̅1, V̅2, V̅3, V̅4} is linearly dependent.
Solution: The given set of elements will be linearly dependent if there exist scalars α1, α2, α3, α4, not all
zero, such that
α1V̅1 + α2V̅2 + α3V̅3 + α4V̅4 = 0̅ …..(1)
Substituting for V̅1, V̅2, V̅3 and V̅4, we get
α1 + α4 = 0,
−α1 + α2 + 2α3 = 0,
−α2 + α3 + 3α4 = 0.
The solution of this system of equations is
α1 = −α4, α2 = 5α4/3, α3 = −4α4/3, and α4 is arbitrary.
From (1), we obtain −α4V̅1 + (5α4/3)V̅2 − (4α4/3)V̅3 + α4V̅4 = 0̅,
so −V̅1 + (5/3)V̅2 − (4/3)V̅3 + V̅4 = 0̅.
Hence there exist scalars, not all zero, such that equation (1) is satisfied.
Therefore the given set of vectors is linearly dependent.

Orthogonal Vectors:
The vectors V̅1 and V̅2 are said to be orthogonal if V̅1 · V̅2 = 0 (i.e., V̅1ᵀ V̅2 = 0).
Example 4: V̅1 = (1, 1, 2)ᵀ and V̅2 = (−1, −1, 1)ᵀ are orthogonal vectors, since
V̅1 · V̅2 = V̅1ᵀ V̅2 = (1)(−1) + (1)(−1) + (2)(1) = 0.
Example 5: V̅1 = (3i, 4i, 0)ᵀ, V̅2 = (−4i, 3i, 0)ᵀ and V̅3 = (0, 0, 1+i)ᵀ are mutually orthogonal vectors,
since V̅1ᵀ V̅2 = V̅2ᵀ V̅3 = V̅3ᵀ V̅1 = 0.
Orthonormal vectors:
The vectors V̅1 and V̅2 for which V̅1 · V̅2 = 0 and ‖V̅1‖ = 1, ‖V̅2‖ = 1 are called orthonormal vectors.
Note: If V̅1 and V̅2 are any vectors such that V̅1 · V̅2 = 0, then V̅1/‖V̅1‖ and V̅2/‖V̅2‖ are orthonormal
vectors.
Example 6: (1, 0, 0)ᵀ, (0, 1, 0)ᵀ and (0, 0, 1)ᵀ are orthonormal vectors.
Example 7: (3i/5, 4i/5, 0)ᵀ, (−4i/5, 3i/5, 0)ᵀ and (0, 0, (1+i)/√2)ᵀ are orthonormal vectors.

Note: A real matrix A is orthogonal if A⁻¹ = Aᵀ.

Example 8: A = [cos θ  −sin θ; sin θ  cos θ] is an orthogonal matrix.
Example 9: Show that the vectors (1/√3)(1, 1, 1)ᵀ, (1/√2)(1, −1, 0)ᵀ and (1/√6)(1, 1, −2)ᵀ are
orthonormal vectors.
Solution: Let V̅1 = (1/√3)(1, 1, 1)ᵀ, V̅2 = (1/√2)(1, −1, 0)ᵀ, V̅3 = (1/√6)(1, 1, −2)ᵀ.
Then V̅1 · V̅2 = V̅2 · V̅3 = V̅1 · V̅3 = 0 and ‖V̅1‖ = ‖V̅2‖ = ‖V̅3‖ = 1. Hence the vectors are orthonormal.

Projection of Vectors:
Given two vectors U̅ and V̅, we can ask how far we travel in the direction of V̅ when we move along U̅.
The distance we travel in the direction of V̅ while traversing U̅ is called the component of U̅ with respect
to V̅ and is denoted comp_v U̅.
The vector parallel to V̅, with magnitude comp_v U̅, in the direction of V̅ is called the projection of U̅
onto V̅ and is denoted Proj_v U̅.
So comp_v U̅ = ‖Proj_v U̅‖.
Note that Proj_v U̅ is a vector while comp_v U̅ is a scalar.
From the figure, comp_v U̅ = ‖U̅‖ cos θ.
We wish to find a formula for the projection of U̅ onto V̅.
Consider U̅ · V̅ = ‖U̅‖ ‖V̅‖ cos θ.
Thus (U̅ · V̅)/‖V̅‖ = ‖U̅‖ cos θ,
so comp_v U̅ = (U̅ · V̅)/‖V̅‖.
The unit vector in the same direction as V̅ is V̅/‖V̅‖. So Proj_v U̅ = ((U̅ · V̅)/‖V̅‖²) V̅.

Example 10:
a. Find the projection of u = i + 2j onto v = i + j.
u · v = 1 + 2 = 3, ‖v‖² = (√2)² = 2
proj_v u = (u · v/‖v‖²)v = (3/2)(i + j) = 3i/2 + 3j/2

b. Find proj_v u, where u = (1, 2, 1) and v = (1, 1, 2).
u · v = 1 + 2 + 2 = 5, ‖v‖² = (√(1² + 1² + 2²))² = 6
so proj_v u = (5/6)(1, 1, 2)

c. Find the component of u = i + j in the direction of v = 3i + 4j.
u · v = 3 + 4 = 7, ‖v‖ = √(3² + 4²) = √25 = 5
comp_v u = (u · v)/‖v‖ = 7/5

d. Find the components of u = i + 3j − 2k in the directions i, j and k.
u · i = 1, u · j = 3, u · k = −2
‖i‖ = ‖j‖ = ‖k‖ = 1
So comp_i u = 1, comp_j u = 3, comp_k u = −2

So the use of the term 'components' is justified in this context.

Indeed, coordinate axes are arbitrarily chosen and are subject to change.
If u is a new coordinate vector given in terms of the old set, then comp_u w gives the component of the
vector w in the new coordinate system.
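The component and projection formulas above translate directly into NumPy. A small sketch (the helper names `comp` and `proj` are illustrative, not from the text):

```python
import numpy as np

def comp(u, v):
    # Scalar component of u in the direction of v: (u . v) / ||v||
    return np.dot(u, v) / np.linalg.norm(v)

def proj(u, v):
    # Vector projection of u onto v: ((u . v) / ||v||^2) v
    return (np.dot(u, v) / np.dot(v, v)) * v

# Example 10(a): u = i + 2j, v = i + j  ->  proj_v u = (3/2)(i + j)
u = np.array([1.0, 2.0])
v = np.array([1.0, 1.0])
p = proj(u, v)
```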

Example 11:
If coordinates in the plane are rotated by 45°, the vector i is mapped to u = (1/√2)i + (1/√2)j, and the
vector j is mapped to v = (−1/√2)i + (1/√2)j. Find the components of w = 2i − 5j with respect to the
new coordinate vectors u and v, i.e., express w in terms of u and v.
Solution:
w · u = −3/√2, w · v = −7/√2, ‖u‖ = ‖v‖ = 1.
So comp_u w = −3/√2, comp_v w = −7/√2,
and w = (−3/√2)u + (−7/√2)v.

Symmetric, Skew-symmetric and Orthogonal Matrices

A matrix A = [aij] is said to be a real matrix if every element of A is real. A real square matrix A = [aij]
is said to be

a) Symmetric: if Aᵀ = A, i.e., aji = aij

b) Skew-symmetric: if Aᵀ = −A, i.e., aji = −aij

c) Orthogonal: if Aᵀ = A⁻¹, or AᵀA = I
Note: If A is orthogonal then |A| = ±1.

Example 12:
Examine whether the following matrices are symmetric:
(i) A = [1 2 −3; 2 5 −1; −3 −1 7]   (ii) B = [1 2 −3; −5 5 0; −3 0 6]
Solution: The matrix A is symmetric but B is not symmetric, since Aᵀ = A but Bᵀ ≠ B.
Example 13:
Examine whether the following matrices are skew-symmetric:
(i) A = [0 4 0; −4 0 −2; 0 2 0]   (ii) B = [2 −1 −1; −1 3 −9; 1 9 4]
Solution: The matrix A is skew-symmetric (Aᵀ = −A) but the matrix B is not skew-symmetric.
Example 14: Examine whether the matrix A = (1/3)[1 −2 2; 2 2 1; −2 1 2] is orthogonal.

Solution: Since AᵀA = I, the matrix A is orthogonal.
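The check in Example 14 can be reproduced numerically; a small NumPy sketch (illustrative only):

```python
import numpy as np

A = np.array([[1.0, -2.0, 2.0],
              [2.0, 2.0, 1.0],
              [-2.0, 1.0, 2.0]]) / 3.0

# A is orthogonal iff A^T A = I (equivalently A^{-1} = A^T).
is_orthogonal = np.allclose(A.T @ A, np.eye(3))

# For an orthogonal matrix, |A| = +1 or -1.
det = np.linalg.det(A)
```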

Complex matrix: A matrix A = [aij] is said to be a complex matrix if at least one element aij of A is
complex.

Complex conjugate: Let A = [aij] be a complex matrix. The complex conjugate of A is denoted by A̅
and is obtained by replacing each aij of A by its complex conjugate.

Hermitian, Skew-Hermitian and Unitary Matrices
A complex square matrix A is said to be

a) Hermitian: if Aᶿ = A, where θ denotes the transposed conjugate

b) Skew-Hermitian: if Aᶿ = −A
c) Unitary: if AᶿA = AAᶿ = I
Example 15: A = [1, i, −i+2; −i, 2, 2+3i; i+2, 2−3i, 3] is Hermitian, since
Aᵀ = [1, −i, i+2; i, 2, 2−3i; −i+2, 2+3i, 3]
and (A̅ᵀ) = [1, i, −i+2; −i, 2, 2+3i; i+2, 2−3i, 3] = A.

Example 16: A = [−i, 1+2i, 3i; −1+2i, 0, 4+i; 3i, −4+i, 2i] is skew-Hermitian, since
Aᵀ = [−i, −1+2i, 3i; 1+2i, 0, −4+i; 3i, 4+i, 2i]
and (A̅ᵀ) = [i, −1−2i, −3i; 1−2i, 0, −4−i; −3i, 4−i, −2i] = −A.

Example 17: If A = [i 0 0; 0 0 i; 0 i 0], show that A is unitary and also skew-Hermitian.

Solution: Aᵀ = [i 0 0; 0 0 i; 0 i 0], so (A̅ᵀ) = [−i 0 0; 0 0 −i; 0 −i 0] = −A.
Thus A is skew-Hermitian.
Consider A(A̅ᵀ) = [i 0 0; 0 0 i; 0 i 0][−i 0 0; 0 0 −i; 0 −i 0] = [1 0 0; 0 1 0; 0 0 1] = I.
Thus A is unitary.
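Both claims of Example 17 can be verified numerically; a NumPy sketch in which Aᶿ is computed as `A.conj().T`:

```python
import numpy as np

A = np.array([[1j, 0, 0],
              [0, 0, 1j],
              [0, 1j, 0]])

A_theta = A.conj().T                              # transposed conjugate A^theta

is_skew_hermitian = np.allclose(A_theta, -A)      # A^theta = -A
is_unitary = np.allclose(A @ A_theta, np.eye(3))  # A A^theta = I
```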
Example 18:
Reduce the matrix A to echelon form, where
A = [1 3 −1 2; 0 11 −5 3; 2 −5 3 1; 4 1 1 5],
and hence find its rank.

Sol:
Apply R3 → R3 − 2R1 and R4 → R4 − 4R1:
[1 3 −1 2; 0 11 −5 3; 0 −11 5 −3; 0 −11 5 −3]
Apply R3 → R3 + R2 and R4 → R4 + R2:
[1 3 −1 2; 0 11 −5 3; 0 0 0 0; 0 0 0 0]
This is the echelon form of the matrix A.
Since the number of (linearly independent) non-zero rows is 2,
the rank of A = 2.
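Matrix rank can also be computed numerically; a NumPy sketch on a 4×4 matrix whose last two rows are combinations of the first two:

```python
import numpy as np

# Rows 3 and 4 are linear combinations of rows 1 and 2
# (row3 = 2*row1 - row2, row4 = 4*row1 - row2), so the rank is 2.
A = np.array([[1, 3, -1, 2],
              [0, 11, -5, 3],
              [2, -5, 3, 1],
              [4, 1, 1, 5]])

rank = np.linalg.matrix_rank(A)
```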

Example 19: Determine the value of 'b' such that the rank of A is 3, where
A = [1 1 −1 0; 4 4 −3 1; b 2 2 2; 9 9 b 3].

Sol:
Apply R2 → R2 − 4R1, R3 → R3 − 2R1, R4 → R4 − 9R1:
[1 1 −1 0; 0 0 1 1; b−2 0 4 2; 0 0 b+9 3]
Apply R3 → R3 − 4R2, R4 → R4 − 3R2:
[1 1 −1 0; 0 0 1 1; b−2 0 0 −2; 0 0 b+6 0]
Interchange R3 ↔ R4:
[1 1 −1 0; 0 0 1 1; 0 0 b+6 0; b−2 0 0 −2]
Case 1: if b = 2, then |A| = (1)(0)(8)(−2) = 0, so rank(A) < 4; the reduced matrix still has three
linearly independent non-zero rows, hence rank(A) = 3.
Case 2: if b = −6, then the number of non-zero rows is 3, hence rank(A) = 3.

Example 20: Solve x + y + 2z = −2; 3x + y − z = 6; x + 3y + 4z = −4.

Sol:
The given system of equations is
x + y + 2z = −2
3x + y − z = 6
x + 3y + 4z = −4

The augmented matrix is [A/B] = [1 1 2 | −2; 3 1 −1 | 6; 1 3 4 | −4]
R2 → R2 − 3R1, R3 → R3 − R1:
[1 1 2 | −2; 0 −2 −7 | 12; 0 2 2 | −2]
R3 → R3 + R2:
[1 1 2 | −2; 0 −2 −7 | 12; 0 0 −5 | 10]
Hence rank(A) = 3 = rank(A/B).
Thus the system has a unique solution.
By backward substitution:
−5z = 10 ⟹ z = −2
−2y − 7z = 12 ⟹ −2y + 14 = 12 ⟹ y = 1
x + y + 2z = −2 ⟹ x = −2 − 1 + 4 = 1
Example 21:
Solve
x + 2y + 3z = 0
3x + 4y + 4z = 0
7x + 10y + 12z = 0

Sol:
The given system of homogeneous equations can be expressed as AX = 0, with coefficient matrix
A = [1 2 3; 3 4 4; 7 10 12]
R2 → R2 − 3R1, R3 → R3 − 7R1:
[1 2 3; 0 −2 −5; 0 −4 −9]
R3 → R3 − 2R2:
[1 2 3; 0 −2 −5; 0 0 1]
Hence rank(A) = 3 = number of unknowns.
Thus the system has only the trivial solution: by backward substitution,
x = 0, y = 0, z = 0.

Long Answer Questions:


1. Solve 2x − 2y + 4z + 3w = 9; x − y + 2z + 2w = 6; 2x − 2y + z + 2w = 3 and x − y + w = 2.

Sol:
The given system of non-homogeneous linear equations is of the form AX = B:
2x − 2y + 4z + 3w = 9
x − y + 2z + 2w = 6
2x − 2y + z + 2w = 3
x − y + w = 2
The augmented matrix of the above equations is
[A/B] = [2 −2 4 3 | 9; 1 −1 2 2 | 6; 2 −2 1 2 | 3; 1 −1 0 1 | 2]
Apply elementary row operations on [A/B] and reduce it to echelon form.
R1 ↔ R2:
[1 −1 2 2 | 6; 2 −2 4 3 | 9; 2 −2 1 2 | 3; 1 −1 0 1 | 2]
R2 → R2 − 2R1, R3 → R3 − 2R1, R4 → R4 − R1:
[1 −1 2 2 | 6; 0 0 0 −1 | −3; 0 0 −3 −2 | −9; 0 0 −2 −1 | −4]
R2 ↔ R4:
[1 −1 2 2 | 6; 0 0 −2 −1 | −4; 0 0 −3 −2 | −9; 0 0 0 −1 | −3]
R3 → R3 − (3/2)R2:
[1 −1 2 2 | 6; 0 0 −2 −1 | −4; 0 0 0 −1/2 | −3; 0 0 0 −1 | −3]
R4 → R4 − 2R3:
[1 −1 2 2 | 6; 0 0 −2 −1 | −4; 0 0 0 −1/2 | −3; 0 0 0 0 | 3]
Rank(A) = 3 ≠ 4 = Rank[A/B].
Clearly the given system is inconsistent and therefore the system has no solution.

2. Determine the values of a and b for which the system x + 2y + 3z = 6; x + 3y + 5z = 9;
2x + 5y + az = b has (1) no solution, (2) a unique solution, (3) an infinite number of solutions.

Sol:
Given x + 2y + 3z = 6
x + 3y + 5z = 9
2x + 5y + az = b
Consider the augmented matrix of the given equations:
[A/B] = [1 2 3 | 6; 1 3 5 | 9; 2 5 a | b]
R2 → R2 − R1, R3 → R3 − 2R1:
[1 2 3 | 6; 0 1 2 | 3; 0 1 a−6 | b−12]
R3 → R3 − R2:
[1 2 3 | 6; 0 1 2 | 3; 0 0 a−8 | b−15]
Case (1): If a = 8 and b ≠ 15, then rank(A) = 2 ≠ 3 = rank[A/B].
In this case the system AX = B is inconsistent and has no solution.
Case (2): If a ≠ 8 and b is any value, then rank(A) = 3 = rank[A/B].
In this case the system AX = B is consistent and has a unique solution:
(a − 8)z = b − 15 ⟹ z = (b − 15)/(a − 8)
y + 2z = 3 ⟹ y = 3 − 2z = (3a − 2b + 6)/(a − 8)
x + 2y + 3z = 6 ⟹ x = 6 − 2y − 3z = (b − 15)/(a − 8)
Case (3): If a = 8 and b = 15, then rank(A) = 2 = rank[A/B].
In this case the system AX = B is consistent and has an infinite number of solutions
(number of free variables = n − r = 3 − 2 = 1).
Then x + 2y + 3z = 6
y + 2z = 3
Let z = k (arbitrary variable). Then y = 3 − 2k and x = 6 − 2(3 − 2k) − 3k = k.
Hence the solution is (x, y, z) = (k, 3 − 2k, k).
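The three cases can be checked by comparing rank(A) with rank[A/B] numerically. A hedged NumPy sketch (the helper `classify` is illustrative, not from the text):

```python
import numpy as np

def classify(a, b):
    # System: x + 2y + 3z = 6; x + 3y + 5z = 9; 2x + 5y + a*z = b
    A = np.array([[1.0, 2.0, 3.0],
                  [1.0, 3.0, 5.0],
                  [2.0, 5.0, a]])
    Ab = np.column_stack([A, [6.0, 9.0, b]])
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(Ab)
    if rA != rAb:
        return "no solution"
    return "unique" if rA == A.shape[1] else "infinite"
```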
 
3. Solve the following equations:
x + y + 2z + 3w = 0
x − 2y − z − w = 0
4x + y + 5z + 8w = 0
5x − 7y − 2z − w = 0

Sol:
The given system of homogeneous linear equations can be expressed as AX = 0, where the coefficient
matrix is
A = [1 1 2 3; 1 −2 −1 −1; 4 1 5 8; 5 −7 −2 −1]
R2 → R2 − R1, R3 → R3 − 4R1, R4 → R4 − 5R1:
[1 1 2 3; 0 −3 −3 −4; 0 −3 −3 −4; 0 −12 −12 −16]
R3 → R3 − R2, R4 → R4 − 4R2:
[1 1 2 3; 0 −3 −3 −4; 0 0 0 0; 0 0 0 0]
Rank(A) = 2 < 4 = number of variables.
Thus the given system has non-trivial solutions.
The equations are
x + y + 2z + 3w = 0
−3y − 3z − 4w = 0
Choose z = k2 and w = k1.
Then y = −(3z + 4w)/3 = −k2 − (4/3)k1
and x = −y − 2z − 3w = k2 + (4/3)k1 − 2k2 − 3k1 = −k2 − (5/3)k1.
Hence (x, y, z, w) = (−k2 − (5/3)k1, −k2 − (4/3)k1, k2, k1).
1 1 
4. Use the Gram-Schmidt method to make the vectors a = (1, 1, 1)ᵀ and b = (1, 0, 2)ᵀ orthogonal.

Sol:
Given a = (1, 1, 1)ᵀ, b = (1, 0, 2)ᵀ.
Set A = a and B = b − (Aᵀb / AᵀA) A.
A = a = (1, 1, 1)ᵀ
Aᵀ = (1 1 1), Aᵀb = 3, AᵀA = 3
B = (1, 0, 2)ᵀ − (3/3)(1, 1, 1)ᵀ = (0, −1, 1)ᵀ
A and B are orthogonal.
Normalizing,
q1 = (1/√3)(1, 1, 1)ᵀ = (1/√3, 1/√3, 1/√3)ᵀ
q2 = (0, −1/√2, 1/√2)ᵀ
and the orthogonal matrix is Q = [1/√3 0; 1/√3 −1/√2; 1/√3 1/√2].

Exercise
1. Find the value of 'k' such that the matrix A = [1 2 1 2; 2 1 2 1; 7 8 k 8] is of rank 2.
2. Find the rank of the matrix A = [2 3 1 −2; 1 2 0 2; 1 4 −2 14].
3. Find the least squares approximate solution of the overdetermined system
[1 2; 2 1; 1 −1] [x; y] = [5; 4; −1].

4. Determine the parameter 𝜆 such that the linear homogeneous system


3 x + 10 y + 5 z = 𝜆𝑥
-2 x - 3 y - 4 z = 𝜆𝑦
3 x + 5 y + 7 z = 𝜆𝑧
has non - trivial solutions. Hence solve the system for the largest real value of 𝜆
5. Let V be the set of all ordered pairs (x, y) where x, y are real numbers. Let a = (x1, y1) and
b = (x2, y2) be two elements in V. Define the addition by a + b = (x1, y1) + (x2, y2) =
(2x1 − 3x2, y1 − y2) and the scalar multiplication by α(x1, y1) = (αx1/3, αy1/3). Show that V is not a
vector space.
6. Examine whether the following vectors in ℝ³/ℂ³ are linearly independent.
(i) (2,2,1), (1,-1,1),(1,0,1)
(ii) (2,i,-1), (1,-3,i), (2i,-1,5)
(iii) (1,3,4) (1,1,0),(1,4,2), (1,-2,1)
7. If A = [0, 1+2i; −1+2i, 0], show that (I − A)(I + A)⁻¹ is a unitary matrix.
8. Show that A = [2i 3i; 3i 0] is skew-Hermitian.

9. Prove that the matrix [4, 1−3i; 1+3i, 7] is a Hermitian matrix.
UNIT-II
MATRIX EIGENVALUE PROBLEM AND QUADRATIC FORMS

Eigenvalues & Eigenvectors: Summary

For a square matrix A, an eigenvalue λ and its eigenvector x satisfy
A x = λ x.
The equation can be rearranged as follows:
Rearrange: Ax − λx = 0
Factorise: (A − λI)x = 0
(Multiplying λ by the identity matrix I produces the matrix [λ 0; 0 λ] in the 2 × 2 case.)
This form is useful for finding the eigenvalues and eigenvectors.
We can see from the last version of the equation that x = 0 is a solution (trivial).
To find the eigenvalues λ we set the determinant of A − λI equal to 0,
i.e. |A − λI| = 0 (this is because eigenvectors are non-zero).
To find the eigenvectors, put the eigenvalues back into the original equation and solve.
Geometric view of Eigenvalues and Eigenvectors
The matrix A acts by stretching the vector x without changing its direction, so x is an eigenvector of A.
(Figure: x and Ax = λx lie along the same line through the origin.)
Applications of eigenvalues and eigenvectors

1. Using singular value decomposition for image compression: This is a note explaining how you
can compress an image by throwing away the small eigenvalues of AAᵀ. It takes an 8 megapixel
image of an Allosaurus and shows how the image looks after compressing by selecting
1, 10, 25, 50, 100 and 200 of the largest singular values.
2. Deriving special relativity is more natural in the language of linear algebra: In fact,
Einstein's second postulate really states that light is an eigenvector of the Lorentz
transformation.
3. Spectral clustering: Whether it's in plants and biology, medical imaging, business and
marketing, understanding the connections between fields of Facebook or even criminology,
clustering is an extremely important part of modern data analysis. It allows people to find
important subsystems or patterns inside noisy data sets. One such method is spectral clustering,
which uses the eigenvalues of the graph of a network. The eigenvector of the second
smallest eigenvalue of the Laplacian matrix allows us to find the two largest clusters in a network.
4. Dimensionality reduction / PCA: The principal components correspond to the largest
eigenvalues of AᵀA and yield the least squared projection onto a smaller dimensional hyperplane,
and the eigenvectors become the axes of the hyperplane. Dimensionality reduction is extremely
useful in machine learning and data analysis, as it allows one to understand where most of the
variation in the data comes from.
5. Low rank factorization for collaborative prediction: This is what Netflix does to predict the
rating you'll give a movie you have not yet watched. It uses the SVD and throws away the smallest
eigenvalues of AᵀA.
6. The Google PageRank algorithm: The largest eigenvector of the graph of the internet determines
how the pages are ranked.

QUADRATIC FORMS
Eigenvalues and eigenvectors can be used to solve the rotation of axes problem. Recall that classifying
the graph of the quadratic equation
ax² + bxy + cy² + dx + ey + f = 0
is fairly straightforward as long as the equation has no xy-term (that is, b = 0). If the equation has an
xy-term, however, then the classification is accomplished most easily by first performing a rotation of
axes that eliminates the xy-term. The resulting equation (relative to the new x′y′-axes) will then be of
the form
a′(x′)² + c′(y′)² + d′x′ + e′y′ + f′ = 0.
You will see that the coefficients a′ and c′ are eigenvalues of the matrix
[a b/2; b/2 c].
The expression ax² + bxy + cy² is called the quadratic form associated with the equation.

Example: Find the matrix of the quadratic form associated with each quadratic equation.
a. 4x² + 9y² − 36 = 0   b. 13x² − 10xy + 13y² − 72 = 0

Solution:
a) Here a = 4, b = 0 and c = 9; the matrix is A = [4 0; 0 9].
b) Because a = 13, b = −10 and c = 13, the matrix is A = [13 −5; −5 13].
In standard form, the equation 4x² + 9y² − 36 = 0 is x²/3² + y²/2² = 1,
which is the equation of the ellipse shown in figure 1.
Although it is not apparent by inspection, the graph of the equation 13x² − 10xy + 13y² − 72 = 0 is
similar. In fact, when you rotate the x and y axes counterclockwise through 45° to form a new
x′y′-coordinate system, this equation takes the form
(x′)²/3² + (y′)²/2² = 1,
which is the equation of the ellipse shown in figure 2.

Application (one among many)

In computer science, and more specifically in computer algebra, when representing mathematical
objects in a computer there are usually many different ways to represent the same object. In this
context, a canonical form is a representation such that every object has a unique representation.
Thus, the equality of two objects can easily be tested by testing the equality of their canonical
forms.
Properties of Eigenvalues and Eigenvectors
1) If A is real, its eigenvalues are real or occur in complex conjugate pairs.
2) If A is a skew-symmetric matrix, then its eigenvalues are zero or purely imaginary.
3) If A is a symmetric matrix with distinct eigenvalues, say λ, μ, ρ, then the corresponding
eigenvectors are orthogonal to each other.
4) The eigenvalues of
a) a Hermitian matrix A are real;
b) a skew-Hermitian matrix S are purely imaginary or zero;
c) a unitary matrix U have absolute value 1.

5) Hermitian matrices have orthogonal eigenvectors.
6) Sum of the eigenvalues of A = trace(A).
7) Product of the eigenvalues of A = |A|.
8) If A is a real symmetric matrix, its eigenvalues are always real.

Short Answer Questions:

1. Find the eigenvalues of A, A², A⁻¹ and A + 4I, if A = [2 −1; −1 2].

Sol:
The characteristic equation of A is |A − λI| = λ² − 4λ + 3 = 0 ⟹ λ1 = 1 and λ2 = 3.

A², A⁻¹ and A + 4I keep the same eigenvectors as A. Their eigenvalues are λ², λ⁻¹ and λ + 4:
A² has eigenvalues 1² = 1 and 3² = 9; A⁻¹ has eigenvalues 1 and 1/3;
A + 4I has eigenvalues 1 + 4 = 5 and 3 + 4 = 7.

 3 7 5
Find the sum and product of the eigen values of A   2 4 3 
 1 2 2 
2.

Sol :
We knowthat the sumof theeigenvalues  trace  A .
The product of the eigen values | A |.
Let us consider the eigen values of the matrix are 1 , 2 , 3.
The sum of the eigen values  trace  A  1  2  3  3  4  2  3.
The product of the eigen values | A | 1.2 .3  3(8  6)  7(4  3)  5(0)  1.
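Properties 6 and 7 above (sum = trace, product = determinant) hold for any square matrix and can be spot-checked numerically; a NumPy sketch:

```python
import numpy as np

A = np.array([[-3.0, -7.0, -5.0],
              [2.0, 4.0, 3.0],
              [1.0, 2.0, 2.0]])

eig = np.linalg.eigvals(A)

sum_ok = np.isclose(eig.sum(), np.trace(A))          # sum of eigenvalues = 3
prod_ok = np.isclose(eig.prod(), np.linalg.det(A))   # product = |A| = 1
```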

8 4
3. Find the eigen values and eigen vectors of A   
2 2 

Sol :

26
The charectristic equation | A   I | 0
8 4
 0
2 2
(8   )(2   )  8  0
 2  10  24  0
  4, 6
The corresponding eigen ve cot rs for   4, 6.
Case 1 : The eigen ve cot r corresponding to  4
(A  I )X  0
 8  4 4  x   0 
     
 2 2  4  y   0 
 4 4  x   0 
     
 2 2  y   0 
R2  (1/ 2) R1  R2
 4 4  x   0 
     
 0 0  y   0 
4x  4 y  0
Choose y  k then x  k
 x k   1
   k 
 y k   1
Case 2 : The eigen ve cot r corresponding to   6
(A  I )X  0
 8  6 4  x   0 
     
 2 2  6  y   0 
 2 4  x   0 
     
 2 4  y   0 
R2  (1) R1  R2
 2 4  x   0 
     
 0 0  y   0 
2x  4 y  0
choose y  k then x  k / 2
 x  k   1 
    k 
 y   k / 2 1/ 2 

4. If the eigenvalues of a square matrix A of order 2×2 are λ = 4 and 6, then find the following:
(i) the eigenvalues of Aᵀ
(ii) the eigenvalues of A⁻¹
(iii) the eigenvalues of B = kA where k = −1/2
(iv) the eigenvalues of A²
(v) the eigenvalues of B = A + kI where k = 2
(vi) the eigenvalues of B = A + kI where k = −1
Sol:

Let λ1 = 4, λ2 = 6 be the eigenvalues of A. Then

(i) the eigenvalues of Aᵀ are λ1, λ2, i.e., 4 and 6
(ii) the eigenvalues of A⁻¹ are 1/λ1 and 1/λ2, i.e., 1/4 and 1/6
(iii) the eigenvalues of B = kA, k = −1/2, are −λ1/2 and −λ2/2, i.e., −2 and −3
(iv) the eigenvalues of A² are λ1², λ2², i.e., 16, 36
(v) the eigenvalues of B = A + kI, k = 2, are λ1 + 2 and λ2 + 2, i.e., 6 and 8
(vi) the eigenvalues of B = A + kI, k = −1, are λ1 − 1 and λ2 − 1, i.e., 3 and 5

5) What is the quadratic form associated with the matrix A = [3 −1 0; −1 2 −1; 0 −1 3]?

Sol:
If X = [x1; x2; x3], then
f(X) = XᵀAX = [x1 x2 x3][3 −1 0; −1 2 −1; 0 −1 3][x1; x2; x3]
= 3x1² + 2x2² + 3x3² − 2x1x2 − 2x2x3.

6) Find the nature of the quadratic form 𝟐𝒙𝟐 + 𝟐𝒚𝟐 + 𝟐𝒛𝟐 + 𝟐𝒚𝒛

Sol :
2 0 0
The associated symmetric matrix of the given quadratic form is 𝐴 = [0 2 1]
0 1 2
The roots of the characteristic equation det(A − λI) = 0 are 1, 2, 3.
All the roots are positive, so the quadratic form is positive definite.
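Definiteness via eigenvalue signs is easy to test numerically for a symmetric matrix; a NumPy sketch using `eigvalsh`:

```python
import numpy as np

# Symmetric matrix of the quadratic form 2x^2 + 2y^2 + 2z^2 + 2yz.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigvalsh is for symmetric/Hermitian matrices; eigenvalues come sorted.
lams = np.linalg.eigvalsh(A)
positive_definite = bool(np.all(lams > 0))
```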

7) Find the index and signature of the quadratic form 3𝒙𝟐 + 𝟐𝒚𝟐 + 𝟑𝒛𝟐 − 𝟐𝒙𝒚 − 𝟐𝒚𝒛

Sol :
3 −1 0
The associated symmetric matrix of the given quadratic form is 𝐴 = [−1 2 −1]
0 −1 3
The roots of the characteristic equation det(A − λI) = 0 are 3, 1, 4, and the sum-of-squares form is
3y1² + y2² + 4y3².
Here r = rank(A) = 3, s = index = number of positive terms = 3, signature = 2s − r = 3.

𝟒 𝟏
8) Find the matrix P which diagonalize the matrix 𝑨 = [ ]
𝟐 𝟑

Sol:
The characteristic equation of A is det(A − λI) = λ² − 7λ + 10 = 0.
Clearly, the eigenvalues of A are λ = 2 and 5.
The corresponding eigenvectors are
for λ = 2, X1 = k[1; −2], and
for λ = 5, X2 = k[1; 1].
Therefore the modal matrix P = [1 1; −2 1] diagonalizes the matrix A = [4 1; 2 3].

𝟏 𝟏
9) Diagonalize the matrix 𝑨 = [ ] over Real. Is it diagonalizable over Complex?
−𝟏 𝟏

Sol:
The characteristic equation of A is det(A − λI) = λ² − 2λ + 2 = 0.
Therefore the eigenvalues are λ = 1 ± i.
Clearly, the matrix A has no real eigenvalues, so it is not diagonalizable over the reals.
But it is diagonalizable over the complex numbers, meaning that we can find an invertible complex
matrix P = [1 1; i −i] such that P⁻¹AP = D = [1+i 0; 0 1−i].

𝟏 −𝟐 𝟑 𝟒 −𝟓
−𝟐 𝟔 𝟎 −𝟏 𝟖
10) Is the matrix 𝑨 = 𝟑 𝟎 𝟕 −𝟒 −𝟔 orthogonally diagonalizable? Why?
𝟒 −𝟏 −𝟒 −𝟗 𝟏
[−𝟓 𝟖 −𝟔 𝟏 𝟒 ]𝟓𝒙𝟓

Sol :
We know that every symmetric matrix is orthogonally diagonalizable . The given matrix
A is symmetric of order 5x5. Hence the matrix A is orthogonally diagonalizable.

11) Diagonalize the matrix 𝑨 = [ 5 4; 1 2 ] and hence find 𝑨¹⁵

Sol :
We can easily diagonalize the given 2×2 matrix. The modal matrix 𝑃 = [ 4 1; 1 −1 ] satisfies 𝑃⁻¹𝐴𝑃 = 𝐷 = [ 6 0; 0 1 ], where 6 and 1 are the eigenvalues.
Therefore, 𝐴¹⁵ = 𝑃𝐷¹⁵𝑃⁻¹ = 𝑃 [ 6¹⁵ 0; 0 1 ] 𝑃⁻¹
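The same computation can be sketched in NumPy, with `matrix_power` as an independent cross-check (this example is an addition, not part of the original solution):

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [1.0, 2.0]])
P = np.array([[4.0, 1.0],
              [1.0, -1.0]])

# A^15 = P D^15 P^{-1}; powers of a diagonal matrix are entrywise powers
A15 = P @ np.diag([6.0**15, 1.0**15]) @ np.linalg.inv(P)

# Cross-check against repeated matrix multiplication
assert np.allclose(A15, np.linalg.matrix_power(A, 15))
```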
12. If possible, diagonalize the matrix 𝑨 = [ 0 1 0; 0 0 1; 2 −5 4 ]

Sol :
The Characteristic equation of A is :det(𝐴 − 𝜆𝐼 ) = 𝜆3 − 4𝜆2 + 5𝜆 − 2 = 0
The eigenvalues are 𝜆 = 1, 1, 2
The corresponding eigenvector for 𝜆1 = 𝜆2 = 1 is 𝑋1 = 𝑘 [ 1; 1; 1 ], and for 𝜆3 = 2 it is 𝑋2 = 𝑘 [ 1/4; 1/2; 1 ]
Since the eigenvalue 𝜆 = 1 has algebraic multiplicity 2 but geometric multiplicity 1, A is not diagonalizable.
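The multiplicity argument can be verified numerically: the geometric multiplicity of 𝜆 = 1 is the nullity of A − I. A NumPy sketch (added here as an illustration):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [2.0, -5.0, 4.0]])

# Geometric multiplicity of lambda = 1 is dim null(A - I) = 3 - rank(A - I)
geometric_mult = 3 - np.linalg.matrix_rank(A - np.eye(3))
print(geometric_mult)   # 1, while the algebraic multiplicity is 2
```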

Long Answer Questions

3 1 4
5. Find the eigenvalues and eigenve c tors of A  0 2 6 
0 0 5 

Sol :
Sin ce the matrix A is upper triangluar , the eigen values arethe
diagonal elements of A
The eigen values of A are 2,3 and 5
which are diagonal elements of A
The eigen vectors of A are

Case 1: for   2
(A  I)X  0
3  2 1 4   x   0  1 1 4   x   0 
 0 22 6   y   0    0 0 6   y    0 

 0 0 5  2   z  0   0 0 3   z   0 
R3  (1/ 2) R2  R3

1 1 4   x   0 
0 0 6   y   0
    
0 0 0   z   0 
6z  0  z  0
x  y  4z  0
x y 0
Choose y  k then x  k
The corresponding eigen vector for   2
 x   k   1
     
X1   y    k   k  1 
z  0  0
     
Case 2 : for   5
(A  I )X  0
 2 1 4   x  0 
 0 3 6   y   0 
    
 0 0 0   z  0 
2 x  y  4 z  0
3 y  4 z  0

31
choose z  k then y  2k , x  3k
the corresponding eigen vector for   5
 x   3k   3
     
X 2   y    2k   k  2 
z  0  1
     
Case 3 : Eigen ve c tor for   3
(A  I )X  0
0 1 4   x  0
0 1 6   y   0
    
0 0 2   z  0 
R2  R1  R2
0 1 4   x  0 
0 0 10  y   0
    
0 0 2   z  0
R3  (1/ 5) R2  R3
0 1 4   x  0
0 0 10   y   0 
    
0 0 0   z  0 
y  4z  0
z  0 then y  0
let x  k
 x  1
   
 y    0 
 z  0
   
the eigen ve c tor corresponding to   3
 2 1 4   x   0 
 0 3 6   y   0 
    
 0 0 0   z  0 
2 x  y  4 z  0 &  3 y  6 z  0

Choose z  k then y  2k , x  3k
the corresponding eigen vector for   3
 x   3k   3
     
X 2   y    2k   k  2 
z  0  1
     

32
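As a numerical cross-check of this solution (an added NumPy sketch, not part of the original text):

```python
import numpy as np

A = np.array([[3.0, 1.0, 4.0],
              [0.0, 2.0, 6.0],
              [0.0, 0.0, 5.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# For a triangular matrix the eigenvalues are the diagonal entries
assert np.allclose(sorted(eigenvalues), [2.0, 3.0, 5.0])

# Each column of `eigenvectors` satisfies A v = lambda v
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```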
1 0 1
Find the Eigen values and eigen vectors of A  1 2 1 
6.
 2 2 3 
Also det er min e, whether the eigen vectors are orthogonal ?

Sol :
The charecteristic equation of A is | A   I | 0
1  0 1
1 2 1 0
2 2 3
 3  6 2  11  6  0
(  1)(  2)(  3)  0
   1, 2,3 are distinct eigen values of A

The eigen ve cot r corresponding to eigen value   1:


1  1 0 1   x  0 
 1 2  1 1   y   0 
    
 2 2 3  1  z  0 
 0 0 1  x  0
1 1 1   y   0 
    
 2 2 2   z  0 
R1  R2 then R2  R3
1 1 1   x  0
 2 2 2   y   0
    
 0 0 1  z  0
R2  (1) R1  R2
1 1 1   x   0 
0 0 0   y   0
    
0 0 1  z  0
z  0  z  0
x y z 0 x y 0
choose y  k then x   k
 x  1 
   
 y  k 1 
z 0
   

33
The eigen ve c tor corresponding to eigen value   2 :
1  2 0 1   x  0
 1 22 1   y   0

 2 2 3  2   z  0 
 1 0 1  x  0
 1 0 1   y   0
    
 2 2 1   z  0
R1  R1  R2 then R3  2 R1  R3
 1 0 1  x  0 
 0 0 0   y   0 
    
 0 2 1  z  0 
x  z  0
2y  z  0
Choose z  k then x  k and y  k / 2
 x  1   2 
     
 y   k 1/ 2  (or )  1 
z  1   2
     

The eigen ve c tor corresponding to eigen value   3 :


1  3 0 1   x  0 
 1 23 1   y   0 

 2 2 3  3  z  0 
 2 0 1  x  0
1 1 1   y   0

 2 2 0   z  0
R1  R2
1 1 1   x  0
 2 0 1  y   0

 2 2 0   z  0
then R2  2 R1  R2
R3  2 R1  R3
1 1 1   x  0
0 2 1   y   0
    
0 4 2  z  0
R3  2 R2  R3

34
1 1 1   x  0 
0 2 1   y   0 
    
0 0 0   z  0 
x yz 0
2 y  z  0
Choose z  k then x  k / 2 and y  k / 2
 x  1/ 2  1
     
 y   k  1/ 2  (or )  1 
z  1   2 
     
The eigen ve c tors   1, 2,3 respectively are
 1  2  1
     
X 1   1  , X 2   1  & X 3   1 
0  2  2 
     
Sin ce X 1 X 2  3  0
T

X 2T X 3  7  0
X 3T X 1  0
the eigen vectors X 1 and X 3 only orthogonal .
1 1 
7. .Use Gram Schmidth method to makethe vectors a  1 and b  0  orthogonal.
 
1  2
Sol :
1 1 
 
Given a  1 , b   0 
1  2
Aa
AT b
B  b  T .A
A A
1
A  a  1
1
AT  1 1 1
AT b  3
AT A  3

35
1  1 1  1  0 
  3       
B  0   1  0   1   1
3
 2 1  2 1  1 
A and B are orthogonal

1 1/ 3 
1   
q1  1   1/ 3 
3  
1 1/ 3 
 
 0 
 
q2   1/ 2 
 1/ 2 
 
1/ 3 0 
 
the orthogonal matrix is Q  1/ 3 1/ 2 
 
1/ 3 1/ 2 

8. Find an orthogonal matrix that will diagonalize the real symmetric matrix A = [ 1 2 3; 2 4 6; 3 6 9 ] and also find the resulting diagonal matrix.

Sol :
The characteristic equation is |A − 𝜆I| = 0, i.e.
| 1−𝜆 2 3; 2 4−𝜆 6; 3 6 9−𝜆 | = 0
𝜆³ − 14𝜆² = 0
so the eigenvalues are 𝜆 = 0, 0, 14.

The eigenvectors corresponding to 𝜆 = 0:
[ 1 2 3; 2 4 6; 3 6 9 ] [x; y; z] = [0; 0; 0]
R2 → R2 − 2R1 and R3 → R3 − 3R1 leave the single equation x + 2y + 3z = 0
Let z = k1 and y = k2; then x = −3k1 − 2k2, so
[x; y; z] = k1 [ −3; 0; 1 ] + k2 [ −2; 1; 0 ]

The eigenvector corresponding to 𝜆 = 14:
[ −13 2 3; 2 −10 6; 3 6 −5 ] [x; y; z] = [0; 0; 0]
Row reduction gives y = 2k/3 and x = k/3 for z = k, so
[x; y; z] = (k/3) [ 1; 2; 3 ]

Let X1 = [ −3; 0; 1 ], X2 = [ −2; 1; 0 ], X3 = [ 1; 2; 3 ]
Then ⟨X2, X3⟩ = 0 and ⟨X3, X1⟩ = 0, but ⟨X1, X2⟩ = 6 ≠ 0, so X1 must be replaced by a vector of the 𝜆 = 0 eigenspace that is orthogonal to both X2 and X3.
Let X1 = [ a; b; c ] with
−2a + b = 0 (orthogonality to X2)
a + 2b + 3c = 0 (orthogonality to X3, i.e. membership in the 𝜆 = 0 eigenspace)
Let b = k; then a = k/2 and c = −5k/6, i.e. [ a; b; c ] = (k/6) [ 3; 6; −5 ]
So take X1 = [ 3; 6; −5 ], X2 = [ −2; 1; 0 ], X3 = [ 1; 2; 3 ]
with ‖X1‖ = √70, ‖X2‖ = √5 and ‖X3‖ = √14.
The orthogonal matrix is
P = [ 3/√70 −2/√5 1/√14; 6/√70 1/√5 2/√14; −5/√70 0 3/√14 ]
with Pᵀ = P⁻¹. Then
D = PᵀAP = [ 0 0 0; 0 0 0; 0 0 14 ]

9. Diagonalize A = [ 1 6 1; 1 2 0; 0 0 3 ] and hence find A⁸. Find the modal matrix.

Sol :
The characteristic equation of A is det(𝐴 − 𝜆𝐼) = 𝜆³ − 6𝜆² + 5𝜆 + 12 = 0
The eigenvalues are 𝜆 = −1, 3, 4
The corresponding eigenvectors are
for 𝜆1 = −1, 𝑋1 = 𝑘 [ −3; 1; 0 ]; for 𝜆2 = 3, 𝑋2 = 𝑘 [ 1; 1; −4 ]; and for 𝜆3 = 4, 𝑋3 = 𝑘 [ 2; 1; 0 ]
The modal matrix is 𝑃 = [ −3 1 2; 1 1 1; 0 −4 0 ] with 𝑃⁻¹ = (1/20) [ −4 8 1; 0 0 −5; 4 12 4 ]
Furthermore 𝑃⁻¹𝐴𝑃 = 𝐷 = [ −1 0 0; 0 3 0; 0 0 4 ], as can be easily checked.
Also 𝐴⁸ = 𝑃𝐷⁸𝑃⁻¹ = [ 26215 78642 24574; 13107 39322 11467; 0 0 6561 ]

10. Orthogonally diagonalize the matrix 𝑨 = [ 2 1 1; 1 2 1; 1 1 2 ]

Sol :
The Characteristic equation of A is :det(𝐴 − 𝜆𝐼 ) = 𝜆3 − 6𝜆2 + 9𝜆 − 4 = 0
The Eigenvalues are : 𝜆 = 1, 1, 4
The corresponding eigenvectors: for 𝜆 = 4, 𝑋1 = 𝑘 [ 1; 1; 1 ]; for 𝜆2 = 𝜆3 = 1, 𝑋2 = 𝑘 [ −1; 0; 1 ] and 𝑋3 = 𝑘 [ −1; 1; 0 ].
We need three orthonormal eigenvectors. First, apply the Gram-Schmidt process to [ −1; 0; 1 ] and [ −1; 1; 0 ] to obtain [ −1; 0; 1 ] and [ −1/2; 1; −1/2 ]. The new vector has been constructed to be orthogonal to [ −1; 0; 1 ], and (being an eigenvector for 𝜆 = 1) it is also orthogonal to [ 1; 1; 1 ].
Thus we have three mutually orthogonal vectors, and all we need to do is normalize them and construct a matrix Q with these vectors as its columns. We find that
𝑄 = [ 1/√3 −1/√2 −1/√6; 1/√3 0 2/√6; 1/√3 1/√2 −1/√6 ]
and one can easily verify that 𝑄ᵀ𝐴𝑄 = 𝐷 = [ 4 0 0; 0 1 0; 0 0 1 ]

11. Find the orthogonal transformation which transforms the quadratic form 𝑥1² + 3𝑥2² + 3𝑥3² − 2𝑥2𝑥3 to a canonical form.

Sol :
The coefficient matrix A of the given quadratic form is
1 0 0
𝐴 = [0 3 −1]
0 −1 3
The characteristic equation of A is det(𝐴 − 𝜆𝐼 ) = 𝜆3 − 7𝜆2 + 14𝜆 − 8 = 0
By solving this polynomial equation we can get the latent roots: 𝜆 = 1, 2, 4
And the corresponding eigen vectors are

for 𝜆1 = 1, 𝑋1 = 𝑘 [ 1; 0; 0 ]; for 𝜆2 = 2, 𝑋2 = 𝑘 [ 0; 1; 1 ]; and for 𝜆3 = 4, 𝑋3 = 𝑘 [ 0; 1; −1 ]

The normalized vectors of the above eigenvectors 𝑋1, 𝑋2, 𝑋3 are:

𝑒1 = 𝑋1/‖𝑋1‖ = [ 1; 0; 0 ], 𝑒2 = 𝑋2/‖𝑋2‖ = [ 0; 1/√2; 1/√2 ] and 𝑒3 = 𝑋3/‖𝑋3‖ = [ 0; 1/√2; −1/√2 ]

Therefore, the normalized modal matrix is


𝑃̂ = [ 1 0 0; 0 1/√2 1/√2; 0 1/√2 −1/√2 ] and 𝑃̂⁻¹ = (1/√2) [ √2 0 0; 0 1 1; 0 1 −1 ]

Furthermore 𝑃̂⁻¹𝐴𝑃̂ = 𝐷 = [ 1 0 0; 0 2 0; 0 0 4 ], as can be easily checked.
Now consider the required non-singular linear transformation 𝑋 = 𝑃̂ 𝑌 that transforms or reduces
the given quadratic form to canonical form.
We know that the quadratic form is 𝑄 = 𝑋 𝑇 𝐴𝑋 where 𝑋 = 𝑃̂ 𝑌, then
𝑇
= (𝑃̂𝑌) 𝐴(𝑃̂𝑌)
= (𝑌 𝑇 𝑃̂𝑇 )𝐴(𝑃̂ 𝑌)
= 𝑌 𝑇 (𝑃̂𝑇 𝐴𝑃̂)𝑌

= 𝑌 𝑇 (𝑃̂−1 𝐴𝑃̂)𝑌 Since 𝑃̂ is orthogonal and 𝑃̂𝑇 = 𝑃̂−1

= 𝑌 𝑇 𝐷𝑌 and is said to be the canonical form


Therefore, the canonical form = [𝑦1 𝑦2 𝑦3] [ 1 0 0; 0 2 0; 0 0 4 ] [𝑦1; 𝑦2; 𝑦3] = 𝑦1² + 2𝑦2² + 4𝑦3²
Thus the required orthogonal transformation is 𝑋 = [𝑥1; 𝑥2; 𝑥3] = 𝑃̂𝑌 = [ 1 0 0; 0 1/√2 1/√2; 0 1/√2 −1/√2 ] [𝑦1; 𝑦2; 𝑦3]

12. Reduce the quadratic form 3𝑥1² + 3𝑥2² + 3𝑥3² + 2𝑥1𝑥2 + 2𝑥1𝑥3 − 2𝑥2𝑥3 into sum of squares form by an orthogonal transformation.

Sol :
The coefficient matrix A of the given quadratic form is

𝐴 = [ 3 1 1; 1 3 −1; 1 −1 3 ]
The characteristic equation of A is det(𝐴 − 𝜆𝐼 ) = 𝜆3 − 9𝜆2 + 24𝜆 − 16 = 0
By solving this polynomial equation we can get the latent roots: 𝜆 = 1, 4, 4
And the corresponding eigen vectors are
for 𝜆1 = 1, 𝑋1 = 𝑘 [ −1; 1; 1 ]; for 𝜆2 = 𝜆3 = 4, the eigenspace is spanned by 𝑋2 = [ 1; 0; 1 ] and 𝑋3 = [ 1; 1; 0 ]
Clearly, the pairs 𝑋1, 𝑋2 and 𝑋1, 𝑋3 are orthogonal, but 𝑋2, 𝑋3 are not. Take 𝑢1 = 𝑋1 = [ −1; 1; 1 ].
Applying the Gram-Schmidt process to 𝑋2, 𝑋3 we obtain the orthogonal vectors 𝑢2 = [ 1; 0; 1 ] and 𝑢3 = [ 1/2; 1; −1/2 ]
The normalized vectors of 𝑢1, 𝑢2, 𝑢3 are:

𝑒1 = 𝑢1/‖𝑢1‖ = [ −1/√3; 1/√3; 1/√3 ], 𝑒2 = 𝑢2/‖𝑢2‖ = [ 1/√2; 0; 1/√2 ] and 𝑒3 = 𝑢3/‖𝑢3‖ = [ 1/√6; 2/√6; −1/√6 ]

Therefore, the normalized modal matrix is

𝑃̂ = [ −1/√3 1/√2 1/√6; 1/√3 0 2/√6; 1/√3 1/√2 −1/√6 ]

Furthermore 𝑃̂⁻¹𝐴𝑃̂ = 𝐷 = [ 1 0 0; 0 4 0; 0 0 4 ], as can be easily checked.
Now consider the required non-singular linear transformation 𝑋 = 𝑃̂ 𝑌 that transforms or reduces
the given quadratic form to canonical form.

We know that the quadratic form is 𝑄 = 𝑋 𝑇 𝐴𝑋 where 𝑋 = 𝑃̂ 𝑌, then


𝑇
= (𝑃̂𝑌) 𝐴(𝑃̂𝑌)
= (𝑌 𝑇 𝑃̂𝑇 )𝐴(𝑃̂ 𝑌)
= 𝑌 𝑇 (𝑃̂𝑇 𝐴𝑃̂)𝑌

= 𝑌 𝑇 (𝑃̂−1 𝐴𝑃̂)𝑌 Since 𝑃̂ is orthogonal and 𝑃̂𝑇 = 𝑃̂−1

= 𝑌 𝑇 𝐷𝑌 and is said to be the canonical form

Therefore, the canonical form = [𝑦1 𝑦2 𝑦3] [ 1 0 0; 0 4 0; 0 0 4 ] [𝑦1; 𝑦2; 𝑦3] = 𝑦1² + 4𝑦2² + 4𝑦3²

13. Find the eigenvalues and eigenvectors of 𝐴 = [ 2 3+4𝑖; 3−4𝑖 2 ]

Sol :

|𝐴 − 𝜆𝐼| = | 2−𝜆 3+4𝑖; 3−4𝑖 2−𝜆 | = 0

𝜆² − 4𝜆 − 21 = 0

𝜆 = −3, 7
(The eigenvalues of a Hermitian matrix are real.)
For 𝜆 = −3:
[ 5 3+4𝑖; 3−4𝑖 5 ] [𝑥1; 𝑥2] = 0 ⇒ 𝑥1 = −((3+4𝑖)/5) 𝑥2
The eigenvector corresponding to 𝜆 = −3 is
𝑋1 = [ −((3+4𝑖)/5) 𝑥2; 𝑥2 ]; let 𝑥2 = 5 ⇒ 𝑋1 = [ −(3+4𝑖); 5 ]
For 𝜆 = 7:
𝑋2 = [ 3+4𝑖; 5 ] (𝑋1 and 𝑋2 are orthogonal to each other)

Exercise
10. Find the eigenvector corresponding to the largest Eigen value of the matrix
4 3 1
𝐴 = [0 5 2 ]
0 0 8
8  4 
11. The Eigen values and Eigen vectors of B  2 A  (1 / 2) A  3I , where A  
2
.
2 2 
12. Prove that the Eigenvalue of a skew Hermitian matrix are purely imaginary or zero

13. Diagonalize the following matrices.

(i) 𝐴 = [ 11 −4 −7; 7 −2 −5; 10 −4 −6 ] and hence find 𝐴⁵.
7 −2 1
14. Diagonalize the matrix 𝐴 = [−2 10 −2]
1 −2 7
15. Determine the nature of the quadratic form Q(X) = 17x² − 30xy + 17z²

16. Find the signature of the quadratic form Q(X) = 6x² − 4xy + 2y²

17. Reduce the quadratic form to sum of squares form (canonical form) and find the corresponding
linear transformation. Also find the index and signature.
(a) 6x1² + 3x2² + 3x3² − 4x1x2 − 2x2x3 + 4x1x3.
(b) 4x2 + 3y2 + z2 – xy – 6yz + 4xz. [ ans 4y12 – y22 + y32]

18. Find the eigenvalues and eigenvectors of 𝐴 = [ −𝑖 0 0; 0 0 −𝑖; 0 −𝑖 0 ]
Ans: 𝜆 = −𝑖, −𝑖, 𝑖 with eigenvectors 𝑋1 = [ 1; 0; 0 ] and 𝑋2 = [ 0; 1; 1 ] for 𝜆 = −𝑖, and 𝑋3 = [ 0; 1; −1 ] for 𝜆 = 𝑖

Unit-III
Matrix Decomposition and Least squares solution of algebraic system

LU Decomposition:

Let A be a square matrix. An LU factorization refers to the factorization of A, with proper row and/or
column orderings or permutations, into two factors – a lower triangular matrix L and an upper triangular
matrix U: A=LU

In the lower triangular matrix all elements above the diagonal are zero, in the upper triangular matrix, all
the elements below the diagonal are zero. For example, for a 3 × 3 matrix A, its LU decomposition looks
like this:

[ 𝑎11 𝑎12 𝑎13; 𝑎21 𝑎22 𝑎23; 𝑎31 𝑎32 𝑎33 ] = [ 𝑙11 0 0; 𝑙21 𝑙22 0; 𝑙31 𝑙32 𝑙33 ] [ 𝑢11 𝑢12 𝑢13; 0 𝑢22 𝑢23; 0 0 𝑢33 ]

Without a proper ordering or permutations in the matrix, the factorization may fail to materialize. For
example, it is easy to verify (by expanding the matrix multiplication) that 𝑎11 = 𝑙11 𝑢11 . If 𝑎11 = 0, then
at least one of 𝑙11 and 𝑢11 has to be zero, which implies that either L or U is singular. This is impossible if
A is nonsingular (invertible). This is a procedural problem. It can be removed by simply reordering the
rows of A so that the first element of the permuted matrix is nonzero. The same problem in subsequent
factorization steps can be removed in the same way.

1. Solve the following equations using (LU decomposition method) Crout's method.
𝒙𝟏 + 𝒙𝟐 + 𝒙𝟑 = 𝟏, 𝟒𝒙𝟏 + 𝟑𝒙𝟐 − 𝒙𝟑 = 𝟔 & 𝟑𝒙𝟏 + 𝟓𝒙𝟐 + 𝟑𝒙𝟑 = 𝟒

Solution:
The given system of equations can be written as AX=B
1 1 1 𝑥1 1
where A=[4 3 −1] ; 𝑋 = [𝑥2 ] & 𝐵 = [6]
3 5 3 𝑥3 4
Using the LU decomposition method, Choose LU=A
𝑙11 0 0 1 𝑢12 𝑢13
where 𝐿 = [𝑙21 𝑙22 0 ] 𝑎𝑛𝑑 𝑈 = [0 1 𝑢23 ]
𝑙31 𝑙32 𝑙33 0 0 1
𝐿𝑈 = 𝐴
𝑙11 0 0 1 𝑢12 𝑢13 1 1 1
[𝑙21 𝑙22 0 ] [0 1 𝑢23 ] = [4 3 −1]
𝑙31 𝑙32 𝑙33 0 0 1 3 5 3
Multiplying and comparing the corresponding elements, we get
𝑙11 = 1; 𝑙11𝑢12 = 1 ⇒ 𝑢12 = 1; 𝑙11𝑢13 = 1 ⇒ 𝑢13 = 1; 𝑙21 = 4;
𝑙21𝑢12 + 𝑙22 = 3 ⇒ 𝑙22 = −1; 𝑙21𝑢13 + 𝑙22𝑢23 = −1 ⇒ 𝑢23 = 5;
𝑙31 = 3; 𝑙31𝑢12 + 𝑙32 = 5 ⇒ 𝑙32 = 2; 𝑙31𝑢13 + 𝑙32𝑢23 + 𝑙33 = 3 ⇒ 𝑙33 = −10;

𝐿 = [ 1 0 0; 4 −1 0; 3 2 −10 ] and 𝑈 = [ 1 1 1; 0 1 5; 0 0 1 ]

Now consider 𝐿𝑌 = 𝐵:
[ 1 0 0; 4 −1 0; 3 2 −10 ] [𝑦1; 𝑦2; 𝑦3] = [1; 6; 4]
By forward substitution, 𝑦1 = 1, 𝑦2 = −2 and 𝑦3 = −1/2, i.e. 𝑌 = [ 1; −2; −1/2 ]
Now considering the matrix equation 𝑈𝑋 = 𝑌, we get
[ 1 1 1; 0 1 5; 0 0 1 ] [𝑥1; 𝑥2; 𝑥3] = [ 1; −2; −1/2 ]
By back substitution we get 𝑥1 = 1, 𝑥2 = 1/2 and 𝑥3 = −1/2
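The Crout factorization above can be written as a short routine; the following NumPy implementation (an added illustration, without pivoting) reproduces the L, U and solution of this example:

```python
import numpy as np

def crout_solve(A, b):
    """Solve Ax = b via Crout LU (U has unit diagonal), no pivoting."""
    n = len(b)
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        for i in range(j, n):              # column j of L
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        for k in range(j + 1, n):          # row j of U
            U[j, k] = (A[j, k] - L[j, :j] @ U[:j, k]) / L[j, j]
    y = np.zeros(n)                        # forward substitution: L y = b
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)                        # back substitution: U x = y
    for i in range(n - 1, -1, -1):
        x[i] = y[i] - U[i, i + 1:] @ x[i + 1:]
    return L, U, x

A = np.array([[1.0, 1.0, 1.0],
              [4.0, 3.0, -1.0],
              [3.0, 5.0, 3.0]])
b = np.array([1.0, 6.0, 4.0])
L, U, x = crout_solve(A, b)
print(x)        # [ 1.   0.5 -0.5]
```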

2. Using the Cholesky method, solve the system of equations.


𝟒𝒙 − 𝒚 − 𝒛 = 𝟑;
−𝒙 + 𝟒𝒚 − 𝟑𝒛 = −𝟎. 𝟓
−𝒙 − 𝟑𝒚 + 𝟓𝒛 = 𝟎

Solution:
The given system of equations are
4𝑥 − 𝑦 − 𝑧 = 3; −𝑥 + 4𝑦 − 3𝑧 = −0.5 ; −𝑥 − 3𝑦 + 5𝑧 = 0
4 −1 −1 𝑥 3
Then 𝐴 = [−1 4 −3] ; 𝑋 = [𝑦 ] & 𝐵 = [−0.5]
−1 −3 5 𝑧 0

𝐿𝑒𝑡 𝐴 = 𝐿𝑈

[ 𝑙11 0 0; 𝑙21 𝑙22 0; 𝑙31 𝑙32 𝑙33 ] [ 𝑙11 𝑙21 𝑙31; 0 𝑙22 𝑙32; 0 0 𝑙33 ] = [ 4 −1 −1; −1 4 −3; −1 −3 5 ]
Multiplying and comparing the corresponding elements, we get
𝑙11² = 4 ⇒ 𝑙11 = 2; 𝑙11𝑙21 = −1 ⇒ 𝑙21 = −1/2; 𝑙11𝑙31 = −1 ⇒ 𝑙31 = −1/2;
𝑙21² + 𝑙22² = 4 ⇒ 𝑙22 = √15/2; 𝑙21𝑙31 + 𝑙22𝑙32 = −3 ⇒ 𝑙32 = −13/(2√15);
𝑙31² + 𝑙32² + 𝑙33² = 5 ⇒ 𝑙33 = √(29/15);

𝐿 = [ 2 0 0; −1/2 √15/2 0; −1/2 −13/(2√15) √(29/15) ]

Solving 𝐿𝑍 = 𝐵:
[ 2 0 0; −1/2 √15/2 0; −1/2 −13/(2√15) √(29/15) ] [𝑧1; 𝑧2; 𝑧3] = [3; −0.5; 0]
By forward substitution we get 𝑧1 = 3/2, 𝑧2 = 1/(2√15) and 𝑧3 = (29/30)√(15/29)
By solving 𝐿ᵀ𝑋 = 𝑍 with back substitution, we get 𝑥 = 1, 𝑦 = 1/2 and 𝑧 = 1/2
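A numerical sketch of the same solve using `numpy.linalg.cholesky` (an addition to the text):

```python
import numpy as np

A = np.array([[4.0, -1.0, -1.0],
              [-1.0, 4.0, -3.0],
              [-1.0, -3.0, 5.0]])
b = np.array([3.0, -0.5, 0.0])

# Cholesky factor: A = L L^T with L lower triangular
L = np.linalg.cholesky(A)

# Solve L z = b, then L^T x = z
z = np.linalg.solve(L, b)
x = np.linalg.solve(L.T, z)
print(x)        # [1.  0.5 0.5]
```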

Over determined system:


In mathematics, a system of equations is considered over determined if there are more equations than
unknowns. An over determined system is almost always inconsistent (it has no solution) when
constructed with random coefficients. However, an over determined system will have solutions in some
cases, for example if some equation occurs several times in the system, or if some equations are linear
combinations of the others.

1) Solve x + y = 1,x + 2y = 2, x + 3y = 2
1 1 1
Sol: Here A = [1 2] and B = [2]
1 3 2
1 1 𝑥 1
[1 2] [𝑦 ]= [2]
1 3 𝑧 2
1 1 1
[𝐴𝐵] = [1 2 2]
1 3 2
𝑅2 → 𝑅2 − 𝑅1 , 𝑅3 → 𝑅3 − 𝑅1
1 1 1
[𝐴𝐵] ~ [0 1 1]
0 2 1
𝑅3 → 𝑅3 − 2𝑅2
1 1 1
[𝐴𝐵] ~ [0 1 1]
0 0 −1
Here 𝜌(𝐴) = 2 and 𝜌([𝐴 𝐵]) = 3, so the system is inconsistent and has no exact solution.
Interpreting the equations as fitting a line y = a + bx to the data, the three conditions are
a + b = 1
a + 2b = 2
a + 3b = 2

To project 𝑏̅ onto 𝑎̅, we take the projection 𝑝̅ along 𝑎̅ so that the error e = b − p is at right angles to 𝑎̅. Clearly p is some multiple of 𝑎̅: let 𝑝̅ = 𝑥𝑎̅. Then 𝑎̅ ⊥ e gives 𝑎ᵀe = 0 (i.e. ⟨a, e⟩ = 0):
𝑎ᵀ(b − a𝑥) = 0
𝑎ᵀb − 𝑎ᵀa 𝑥 = 0
𝑥 = (𝑎ᵀb)/(𝑎ᵀa)
Then we have the projection 𝑝̅ = 𝑥𝑎̅ = ((𝑎ᵀb)/(𝑎ᵀa)) 𝑎̅, with e = b − p.
Similarly, for projection onto the plane spanned by two vectors 𝑎1, 𝑎2 (the columns of A), we have
p = 𝑥̂1𝑎1 + 𝑥̂2𝑎2 = 𝐴𝑥̂
𝑎1 ⊥ e ⇒ 𝑎1ᵀe = 0 and 𝑎2 ⊥ e ⇒ 𝑎2ᵀe = 0, i.e.
𝐴ᵀ(𝐵 − 𝐴𝑥̂) = 0
𝐴ᵀ𝐵 − 𝐴ᵀ𝐴𝑥̂ = 0
𝑥̂ = (𝐴ᵀ𝐴)⁻¹𝐴ᵀ𝐵
p = 𝐴𝑥̂ = 𝐴(𝐴ᵀ𝐴)⁻¹𝐴ᵀ𝐵


To solve overdetermined problems, instead of solving Ax = B we solve A𝑥̂ = p,
i.e. we solve 𝐴ᵀ𝐴𝑥̂ = 𝐴ᵀ𝐵.
If B is in the column space then PB = B;
if B is perpendicular to the column space then PB = O.
1 1 𝑥 1
Q) Solve [1 2] [𝑦]= [2]
1 3 𝑧 2
1 1 1
Sol: Here A = [1 2] and B = [2]
1 3 2
For over determined problem we should solve 𝐴𝑇 A 𝑥̂ = 𝐴𝑇 B ……… .(1)
1 1
1 1 1 3 6
Now 𝐴𝑇 A = [ ] [1 2] = [ ]
1 2 3 6 14
1 3
1
1 1 1
𝐴𝑇 B = [ ] [2 ] = [ 5 ]
1 2 3 11
2
3 6 𝑥̂ 5
From (1) [ ][ ] = [ ]
6 14 𝑦̂ 11
3 6 5]
Augmented matrix [𝐴𝐵] = [
6 14 11
3 6 5]
𝑅2 → 𝑅2 − 2 𝑅1 , [𝐴𝐵] ∼ [
0 2 1
⇒ 3 𝑥̂ + 6𝑦̂ = 5 and 2𝑦̂ = 1 ⇒ 𝑦̂ = 1/2 then 𝑥̂ = 2/3
Hence the solution is 𝑥̂ = 2/3 , 𝑦̂ = 1/2
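The normal-equation solve can be cross-checked with NumPy's built-in least squares routine (an added sketch, not part of the original text):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Normal equations: A^T A xhat = A^T b
xhat = np.linalg.solve(A.T @ A, A.T @ b)
print(xhat)     # approximately [0.667 0.5]

# Same answer from the built-in least squares routine
lstsq_sol = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(xhat, lstsq_sol)
```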

Example : Solve x = 3 , x + y = 4 , x + 2y = 1.
1 0 3
Sol : Here A = [1 1] and B = [4]
1 2 1
For over determined problem we should solve 𝐴𝑇 A 𝑥̂ = 𝐴𝑇 B ……… .(1)
1 0
1 1 1 3 3
Now 𝐴𝑇 A = [ ] [1 1] = [ ]
0 1 2 3 5
1 2
3
1 1 1 8
𝐴𝑇 B = [ ] [4 ] = [ ]
0 1 2 6
1
3 3 𝑥̂ 8
From (1) [ ][ ] = [ ]
3 5 𝑦
̂ 6
3 3 8
Augmented matrix [𝐴𝐵] = [ ]
3 5 6
3 3 8
𝑅2 → 𝑅2 − 𝑅1 , [𝐴𝐵] ∼ [ ]
0 2 −2
⇒ 3 𝑥̂ + 3𝑦̂ = 8 and 2𝑦̂ = −2 ⇒ 𝑦̂ = −1 then 𝑥̂ = 11/3
Hence the solution is 𝑥̂ = 11/3 , 𝑦̂ = −1

Q-R FACTORIZATION

Any real square matrix A may be decomposed as A=QR, Where Q is an orthogonal matrix (its columns
are orthogonal unit vectors) and R is an upper triangular matrix (also called right triangular matrix). If A
is invertible, then the factorization is unique if we require the diagonal elements of R to be positive.

If instead A is a complex square matrix, then there is a decomposition A = QR where Q is a unitary matrix
.

If A has n linearly independent columns, then the first n columns of Q form an orthonormal basis for the
column space of A. More generally, the first k columns of Q form an orthonormal basis for the span of the
first k columns of A for any 1 ≤ k ≤ n. The fact that any column k of A only depends on the first k columns
of Q is responsible for the triangular form of R.

Let A be the matrix with independent columns, then A can be written as QR where Q is an orthogonal
matrix and R is an upper triangular matrix.
where R = [ q1ᵀa q1ᵀb q1ᵀc; 0 q2ᵀb q2ᵀc; 0 0 q3ᵀc ]

Example 1 : Find the QR factorization for A = [ 1 −1 4; 1 4 −2; 1 4 2; 1 −1 0 ]

Sol: Let a = [ 1; 1; 1; 1 ], b = [ −1; 4; 4; −1 ] and c = [ 4; −2; 2; 0 ] be the columns of A.
Take A = a = [ 1; 1; 1; 1 ]
𝐴ᵀ𝑏 = 6 and 𝐴ᵀ𝐴 = 4
B = b − (𝐴ᵀb/𝐴ᵀA) A = b − (6/4) a = [ −5/2; 5/2; 5/2; −5/2 ]
𝐴ᵀ𝑐 = 4, 𝐵ᵀ𝑐 = −10 and 𝐵ᵀ𝐵 = 25
C = c − (𝐴ᵀc/𝐴ᵀA) A − (𝐵ᵀc/𝐵ᵀB) B = c − a + (2/5) B = [ 2; −2; 2; −2 ]

So A = [ 1; 1; 1; 1 ], B = [ −5/2; 5/2; 5/2; −5/2 ], C = [ 2; −2; 2; −2 ]
Here ‖A‖ = 2, ‖B‖ = 5, ‖C‖ = 4

q1 = [ 1/2; 1/2; 1/2; 1/2 ], q2 = [ −1/2; 1/2; 1/2; −1/2 ], q3 = [ 1/2; −1/2; 1/2; −1/2 ]
Q = [ 1/2 −1/2 1/2; 1/2 1/2 −1/2; 1/2 1/2 1/2; 1/2 −1/2 −1/2 ]
R = [ q1ᵀa q1ᵀb q1ᵀc; 0 q2ᵀb q2ᵀc; 0 0 q3ᵀc ]
with q1ᵀa = 2, q1ᵀb = 3, q1ᵀc = 2, q2ᵀb = 5, q2ᵀc = −2 and q3ᵀc = 4, so
R = [ 2 3 2; 0 5 −2; 0 0 4 ]
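The column-by-column process above can be written as a short classical Gram-Schmidt routine; the sketch below (an addition to the text) reproduces Q and R for this matrix:

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt QR: A = Q R with orthonormal columns in Q."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]     # projection coefficient
            v -= R[i, j] * Q[:, i]          # remove component along q_i
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, -1.0, 4.0],
              [1.0, 4.0, -2.0],
              [1.0, 4.0, 2.0],
              [1.0, -1.0, 0.0]])
Q, R = gram_schmidt_qr(A)
print(np.round(R, 10))   # upper triangular: [[2, 3, 2], [0, 5, -2], [0, 0, 4]]
```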
Example 2 : Find the QR-factorization of 𝑨 = [ 1 2 2; −1 1 2; −1 0 1; 1 1 2 ]

Sol : First we note that the columns {x1, x2, x3} of the matrix A are linearly independent set.
Use Gram-Schmidt process to find an orthonormal vectors {u1, u2, u3} by using {x1, x2, x3} vectors.
Then we get

𝑢1 = [ 1; −1; −1; 1 ], 𝑢2 = [ 3/2; 3/2; 1/2; 1/2 ] and 𝑢3 = [ −1/2; 0; 1/2; 1 ]
Then normalizing these vectors 𝑢1, 𝑢2, 𝑢3 we get

𝑒1 = 𝑢1/‖𝑢1‖ = [ 1/2; −1/2; −1/2; 1/2 ], 𝑒2 = 𝑢2/‖𝑢2‖ = [ 3/(2√5); 3/(2√5); 1/(2√5); 1/(2√5) ] and 𝑒3 = 𝑢3/‖𝑢3‖ = [ −1/√6; 0; 1/√6; 2/√6 ]

So Q = [ 1/2 3/(2√5) −1/√6; −1/2 3/(2√5) 0; −1/2 1/(2√5) 1/√6; 1/2 1/(2√5) 2/√6 ]
Let A = QR be the required factorization, where Q has orthonormal columns (QᵀQ = I) and R is an invertible upper triangular matrix.
Therefore, QᵀA = Qᵀ(QR) = (QᵀQ)R = IR = R
Then we compute 𝑅 = 𝑄ᵀ𝐴 = [ 2 1 1/2; 0 √5 3√5/2; 0 0 √6/2 ]
∴ A = QR

Example 3 : Find the QR-factorization of 𝑨 = [ 1 −1 2; 0 1 3; 3 −3 4 ]

Sol : First we note that the columns {x1, x2, x3} of the matrix A are linearly independent set.
Use Gram-Schmidt process to find an orthonormal vectors {u1, u2, u3} by using {x1, x2, x3} vectors.
Then we get

𝑢1 = [ 1; 0; 3 ], 𝑢2 = [ 0; 1; 0 ] and 𝑢3 = [ 3/5; 0; −1/5 ]
Then normalizing these vectors 𝑢1, 𝑢2, 𝑢3 we get

𝑒1 = 𝑢1/‖𝑢1‖ = [ 1/√10; 0; 3/√10 ], 𝑒2 = 𝑢2/‖𝑢2‖ = [ 0; 1; 0 ] and 𝑒3 = 𝑢3/‖𝑢3‖ = [ 3/√10; 0; −1/√10 ]

So Q = [ 1/√10 0 3/√10; 0 1 0; 3/√10 0 −1/√10 ]

Consider A factorized into the product of two matrices Q and R, where Q has orthonormal columns (QᵀQ = I) and R is an invertible upper triangular matrix, i.e. A = QR.
Therefore, QᵀA = Qᵀ(QR) = (QᵀQ)R = IR = R
Then we compute 𝑅 = 𝑄ᵀ𝐴 = [ √10 −√10 7√10/5; 0 1 3; 0 0 √10/5 ]
∴ A = QR

SINGULAR VALUE DECOMPOSITION:

Suppose M is an m × n matrix whose entries come from the field K, which is either the field of real numbers or the field of complex numbers. Then there exists a factorization, called a singular value decomposition of M, of the form M = UΣV∗, where

• U is an m × m unitary matrix over K (if K = R, unitary matrices are orthogonal matrices),
• Σ is a diagonal m × n matrix with non-negative real numbers on the diagonal, and
• V is an n × n unitary matrix over K, and V∗ is the conjugate transpose of V.

The diagonal entries σi of Σ are known as the singular values of M. A common convention is to list the
singular values in descending order. In this case, the diagonal matrix, Σ, is uniquely determined by M
(though not the matrices U and V if M is not square).

1. Find the singular value decomposition for the matrix 𝑨 = [ 1 1 0; 0 0 1 ]

Sol : First compute


𝐴ᵀ𝐴 = [ 1 1 0; 1 1 0; 0 0 1 ], and we find that its eigenvalues are 𝜆1 = 2, 𝜆2 = 1 and 𝜆3 = 0
with corresponding eigenvectors [ 1; 1; 0 ], [ 0; 0; 1 ] and [ −1; 1; 0 ]
(verify this). These vectors are orthogonal, so we normalize them to obtain
𝑣1 = [ 1/√2; 1/√2; 0 ], 𝑣2 = [ 0; 0; 1 ] and 𝑣3 = [ −1/√2; 1/√2; 0 ]
and the singular values of A are 𝜎1 = √2, 𝜎2 = √1 = 1 and 𝜎3 = √0 = 0.

Thus 𝑉 = [ 1/√2 0 −1/√2; 1/√2 0 1/√2; 0 1 0 ] and Σ = [ √2 0 0; 0 1 0 ]
To find U, we compute
𝑢1 = (1/𝜎1)𝐴𝑣1 = [ 1; 0 ] and 𝑢2 = (1/𝜎2)𝐴𝑣2 = [ 0; 1 ]
Therefore, 𝑈 = [ 1 0; 0 1 ]
One easily verifies that 𝑈Σ𝑉ᵀ = [ 1 1 0; 0 0 1 ] = 𝐴
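`numpy.linalg.svd` computes the same decomposition directly (an added sketch; NumPy may choose different signs for the columns of U and V, so the check only verifies the singular values and the product):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Full SVD: A = U @ Sigma @ V^T, singular values in descending order
U, s, Vt = np.linalg.svd(A)
print(s)        # approximately [1.414 1.]

Sigma = np.zeros(A.shape)
np.fill_diagonal(Sigma, s)
assert np.allclose(U @ Sigma @ Vt, A)
```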
2. Find the singular value decomposition for the matrix 𝑨 = [ 1 1; 1 0; 0 1 ]

Sol : Compute 𝐴ᵀ𝐴 = [ 2 1; 1 2 ], which has eigenvalues 𝜆1 = 3, 𝜆2 = 1 with corresponding eigenvectors [ 1; 1 ] and [ −1; 1 ]
The singular values of A are 𝜎1 = √𝜆1 = √3 and 𝜎2 = √𝜆2 = 1
The normalized eigenvectors are 𝑣1 = [ 1/√2; 1/√2 ] and 𝑣2 = [ −1/√2; 1/√2 ]
Thus 𝑉 = [ 1/√2 −1/√2; 1/√2 1/√2 ] and Σ = [ √3 0; 0 1; 0 0 ]
Also we can find 𝑢1 = (1/𝜎1)𝐴𝑣1 = [ 2/√6; 1/√6; 1/√6 ] and 𝑢2 = (1/𝜎2)𝐴𝑣2 = [ 0; −1/√2; 1/√2 ], and using the Gram-Schmidt process we can find 𝑢3 = [ −1/√3; 1/√3; 1/√3 ], which is orthogonal to 𝑢1 and 𝑢2. Take the matrix
𝑈 = [ 2/√6 0 −1/√3; 1/√6 −1/√2 1/√3; 1/√6 1/√2 1/√3 ]
Hence verify that 𝑈Σ𝑉ᵀ = [ 1 1; 1 0; 0 1 ] = 𝐴

MOORE-PENROSE PSEUDO INVERSE ( Generalised Inverse )


In mathematics, and in particular linear algebra, a pseudoinverse A⁺ of a matrix A is a generalization of the inverse matrix. The most widely known type of matrix pseudoinverse is the Moore–Penrose pseudoinverse. A common use of the pseudoinverse is to compute a 'best fit' (least squares) solution to a system of linear equations that lacks a unique solution. The pseudoinverse is defined and unique for all matrices whose entries are real or complex numbers. An ordinary matrix inverse exists for square matrices only, and real-world data is not always square; furthermore, real-world data is not always consistent and might contain repetitions. To deal with real-world data, a generalized inverse for rectangular matrices is needed. It can also be computed using the singular value decomposition.
The pseudoinverse looks like this:

If the columns of a matrix A are linearly independent, so AT· A is invertible and we obtain with the
following formula the pseudo inverse:

A+ = (AT · A)-1 · AT
However, if the rows of the matrix are linearly independent, we obtain the pseudo inverse with the
formula:

A+ = AT· (A · A T) -1

1 2 1 3 
Example : Find the pseudoinverse of A   
 4 3 2 1

Sol :
1 2 1 3 
A 
 4 3 2 1
1 2
let  3  8  5  0
4 3
rank ( A)  2  no. of rows
56
then pseudo inverse is A  AT ( AAT ) 1
1 2 1 3 
A 
 4 3 2 1
1 4 
2 3
AT   
1 2 
 
3 1
1 4
 3  15 15 
1 2 1 3   2
A. A  
T
 
 4 3 2 1 1 2  15 30 
 
3 1
| A. AT | 15(30  15)  225
1 30 15  2 /15 1/15
( A. AT ) 1  
225 15 15  1/15 1/15 
1 4   2 /15 3 /15 
 
2 3   2 /15 1/15  1/15 1/15 
AT ( A. AT )  
1

1 2   1/15 1/15   0 1/15 
   
3 1  5 /15 2 /15
 2 /15 3 /15 
 1/15 1/15 

A  
 0 1/15 
 
 5 /15 2 /15

Exercise
3 −1
1. Find the matrix of singular values of the matrix 𝐴 = [ ]
2 4
4 4
2. Find the matrix of singular values of the matrix 𝐴 = [ ]
−3 3
1 1 0
3. Perform a full SVD (singular value decomposition) of the matrix 𝐴 = [ ]
0 1 1
3 1 1
4. Perform a full SVD (singular value decomposition) of the matrix 𝐴 = [ ]
−1 3 1
1 2
5. Compute the matrix cos(𝐴) for the matrix 𝐴 = [ ] use 2 decimal approximation
3 2
6. Determine the nature of the quadratic form Q(X) = 17x² − 30xy + 17z²

1 1 1
1 −1 2
7. Perform a QR factorization of the matrix 𝐴 = [ ] by the Gram Schmidt process
−1 1 0
1 5 1
1 2 2
−1 1 2
8. Perform a QR factorization of the matrix 𝐴 = [ ] by the Gram Schmidt process
−1 0 1
1 1 2
1 2
9. Find the Moore-Penrose pseudo-inverse of the matrix 𝐴 = [2 1 ]
1 −1
1 2 𝑥 5
10. Find the least squares approximate solution of the over determined system [2 1 ] [𝑦 ] = [ 4 ]
1 −1 𝑧 −1
1 1 𝑥 1
11. Find the least squares approximate solution of the over determined system [1 2] [𝑦] = [2]
1 3 𝑧 2

12. Solve the following equations using (LU decomposition method)


(i) Crout's method and
(ii) Doolittle's method
10𝑥 + 7𝑦 + 8𝑧 + 7𝑤 = 32; 7𝑥 + 5𝑦 + 6𝑧 + 5𝑤 = 23; 8𝑥 + 6𝑦 + 10𝑧 + 9𝑤 = 33 & 7𝑥 +
5𝑦 + 9𝑧 + 10𝑤 = 31.

13. Using the Cholesky method solve the system of equations 16𝑥 + 4𝑦 + 4𝑧 − 4𝑤 = 32; 4𝑥 + 10𝑦 + 4𝑧 + 2𝑤 = 26; 4𝑥 + 4𝑦 + 6𝑧 − 2𝑤 = 20; −4𝑥 + 2𝑦 − 2𝑧 + 4𝑤 = −6.

Unit - IV
Multivariable differential calculus and Function Optimization

Partial Differentiation
Introduction: In mathematics, sometimes the function depends on two or more variables. In this case the
concept of partial derivative arises. Generally partial derivatives are used in vector calculus and
differential geometry.
Functions of Two Variables
If there are 3 variables, say x, y, z and the value of 𝑧 depends upon the value of x, y , then 𝑧 is called a
function of two variables 𝑥 and y .

It is denoted by z = f(x, y). Here z is the dependent variable and x, y are the independent variables. Such a
function can be visualized as a surface in 3 dimensions.
Example
The volume of a circular cylinder of radius r and height h is given by the formula v = 𝜋r²h.
Hence v is a function of the two variables r and h: v is the dependent variable while r, h are the independent variables.
Partial Derivatives of First Order
Let f(x, y) be a function of two independent variables x and y.
If y is kept constant and x alone is allowed to vary, then z becomes a function of x only.
The derivative of z with respect to x, treating y as constant, is called the partial derivative of z with respect to x.
It is denoted by (∂z/∂x) or 𝑧𝑥 and defined as
∂z/∂x = 𝑧𝑥 = lim_{h→0} [f(x + h, y) − f(x, y)] / h

Similarly, the derivative of z with respect to y, treating x as constant, is denoted by (∂z/∂y) or 𝑧𝑦 and defined as
∂z/∂y = 𝑧𝑦 = lim_{k→0} [f(x, y + k) − f(x, y)] / k

 z z
, are called first order partial derivatives of z
 x y

Notation
The second order partial derivatives of 𝑧 = 𝑓(𝑥, 𝑦) are
𝑧𝑥𝑥 = ∂²𝑧/∂𝑥² = ∂/∂𝑥(∂𝑧/∂𝑥),  𝑧𝑦𝑦 = ∂²𝑧/∂𝑦² = ∂/∂𝑦(∂𝑧/∂𝑦),
𝑧𝑥𝑦 = ∂²𝑧/∂𝑥∂𝑦 = ∂/∂𝑥(∂𝑧/∂𝑦),  𝑧𝑦𝑥 = ∂²𝑧/∂𝑦∂𝑥 = ∂/∂𝑦(∂𝑧/∂𝑥)
𝑧𝑥𝑦 = 𝑧𝑦𝑥 can be assumed for most functions that arise in engineering applications.

Composite Functions
Chain Rule
If z = f(u) where u is a function of the variables x and y, then
∂z/∂x = (dz/du)(∂u/∂x) = f′(u) ∂u/∂x
Similarly, ∂z/∂y = (dz/du)(∂u/∂y) = f′(u) ∂u/∂y

Composite function of one variable (total differential coefficient)
Let u = f(x, y) where x = 𝜑(t), y = 𝜓(t); then u is a function of t, called a composite function of the single variable t, and
du/dt = (∂u/∂x)(dx/dt) + (∂u/∂y)(dy/dt)
is called the total differential coefficient of u.
Similarly, if u = f(x, y, z) is a composite function, where x = x(t), y = y(t), z = z(t), then
du/dt = (∂u/∂x)(dx/dt) + (∂u/∂y)(dy/dt) + (∂u/∂z)(dz/dt)
If z = f(x, y) is a composite function, where x = x(u, v), y = y(u, v), then
∂z/∂u = (∂z/∂x)(∂x/∂u) + (∂z/∂y)(∂y/∂u),  ∂z/∂v = (∂z/∂x)(∂x/∂v) + (∂z/∂y)(∂y/∂v)

Jacobian
If u = f(x, y) and v = 𝜑(x, y) are two continuous functions of the independent variables x and y such that u_x, u_y, v_x, v_y are also continuous in x and y,
the Jacobian of u, v with respect to x, y is defined as J = | u_x u_y; v_x v_y |

Notation
J( (u, v) / (x, y) ) = ∂(u, v)/∂(x, y)
Similarly, the Jacobian of u, v, w with respect to x, y, z is defined as
∂(u, v, w)/∂(x, y, z) = J( (u, v, w) / (x, y, z) ) = | u_x u_y u_z; v_x v_y v_z; w_x w_y w_z |

An important application of the Jacobian is in connection with the change of variables in multiple integrals.
Properties of the Jacobian
If J1 is Jacobian of u , v with respect to x, y and J 2 is Jacobian of x, y with respect to u , v then J1 J 2  1

(u, v) (u, v) ( x, y)
If u , v are functions of x, y when x, y are functions of r , s then  
(r , s) ( x, y) (r , s)

 (u, v, w)
If u , v, w are functions of x, y, z and u , v, w are not independent(dependent) then 0
 ( x, y , z )
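Property 1 (J₁J₂ = 1) can be verified numerically; the sketch below uses the polar-coordinate map, with helper names `jacobian` and `det2` of our own choosing:

```python
import math

def jacobian(f, point, h=1e-6):
    """Numerical Jacobian matrix (rows = components of f, cols = variables)."""
    n = len(point)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        a = list(point); a[j] += h
        b = list(point); b[j] -= h
        fa, fb = f(a), f(b)
        for i in range(n):
            J[i][j] = (fa[i] - fb[i]) / (2 * h)
    return J

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

fwd = lambda p: (p[0] * math.cos(p[1]), p[0] * math.sin(p[1]))   # (r, th) -> (x, y)
inv = lambda p: (math.hypot(p[0], p[1]), math.atan2(p[1], p[0])) # (x, y) -> (r, th)

r, th = 2.0, 0.5
J1 = det2(jacobian(fwd, [r, th]))             # d(x,y)/d(r,th) = r
J2 = det2(jacobian(inv, list(fwd([r, th]))))  # d(r,th)/d(x,y) = 1/r
print(round(J1, 6), round(J1 * J2, 6))        # → 2.0 1.0
```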

Gradient vector
The vector ∇f = [∂f/∂x, ∂f/∂y]ᵀ represents the gradient vector associated with the function z = f(x, y). At each point P in its domain, f increases most rapidly in the direction of the gradient vector ∇f at P. Geometrically, the gradient vector represents the normal vector to a given surface.

[Figure: the gradient vector ∇f drawn as the normal to the surface z = f(x, y)]
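The steepest-ascent property can be observed numerically; the surface f = x² + 3y² below is an arbitrary illustrative choice:

```python
import math

def grad(f, x, y, h=1e-6):
    """Central-difference gradient of f at (x, y)."""
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

f = lambda x, y: x**2 + 3 * y**2
gx, gy = grad(f, 1.0, 1.0)            # gradient at (1, 1) is (2, 6)

# among unit directions (one per degree), find where f increases fastest
step = 1e-4
best = max(range(360),
           key=lambda d: f(1 + step * math.cos(math.radians(d)),
                           1 + step * math.sin(math.radians(d))))
grad_dir = round(math.degrees(math.atan2(gy, gx)))  # direction of the gradient

print(best == grad_dir)  # f increases most rapidly along the gradient direction
```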
Hessian matrix
The Hessian matrix is the matrix of second order partial derivatives of a function (equivalently, the Jacobian matrix of its gradient).
The determinant of the Hessian matrix is also referred to as the Hessian.
For a two variable function, the Hessian matrix is defined by

H = |∂²f/∂x²    ∂²f/∂x∂y|  ≡  |r  s|
    |∂²f/∂x∂y  ∂²f/∂y²  |     |s  t|

The Hessian matrix is a symmetric matrix.


Taylor’s theorem for functions of several variables
A function of several variables that is continuously differentiable to sufficiently high order can be expressed in a Taylor series as

f(x) = f(a) + (x − a)ᵀ∇f(a) + (1/2!)(x − a)ᵀ H_f(a) (x − a) + . . .

Taylor's theorem holds the key to devising rules to determine optimum values (maximum and minimum values) of functions of several variables.
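The quadratic truncation of this series can be tested on a concrete function; below, f = eˣ sin y about a = (0, 0) is an illustrative choice, with ∇f(0,0) = (0, 1) and Hessian [[0, 1], [1, 0]] worked out by hand:

```python
import math

f = lambda x, y: math.exp(x) * math.sin(y)

def taylor2(x, y):
    # f(a) + (x - a)^T grad f(a) + (1/2)(x - a)^T H(a) (x - a) at a = (0, 0)
    # = 0 + (0*x + 1*y) + (1/2)(2 * 1 * x * y) = y + x*y
    return y + x * y

e1 = f(0.1, 0.1) - taylor2(0.1, 0.1)   # error at step 0.1
e2 = f(0.2, 0.2) - taylor2(0.2, 0.2)   # error at step 0.2
print(e1 < 1e-3 and 5 < e2 / e1 < 12)  # error shrinks roughly like the cube of the step
```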

Function Optimization
(A) Unconstrained optimization using the Hessian matrix
Second derivative test for a function of two variables:
Let a function 𝑓(𝑥, 𝑦) be continuous and possess first and second order partial derivatives at a point
𝑝(𝑎, 𝑏) where 𝑓𝑥 (𝑎, 𝑏) = 0 and 𝑓𝑦 (𝑎, 𝑏) = 0 (i.e., (a,b) is a critical point of f)

Let H denote “Hessian” matrix of second partial derivatives


H = |f_xx  f_xy|   (a symmetric matrix)
    |f_yx  f_yy|

And let D₁ = f_xx and D₂ = det H = f_xx f_yy − f_xy f_yx.

(a) If D₁(a,b) > 0 and D₂(a,b) > 0, then H is positive definite and f has a relative minimum at (a,b).
(b) If D₁(a,b) < 0 and D₂(a,b) > 0, then H is negative definite and f has a relative maximum at (a,b).
(c) If D₂(a,b) < 0, then H is indefinite and f has a saddle point at (a,b).
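Rules (a)-(c) translate directly into code; a minimal sketch (the function name `classify` is ours):

```python
def classify(fxx, fxy, fyy):
    """Second-derivative test via the leading minors D1 = f_xx, D2 = det H."""
    d1, d2 = fxx, fxx * fyy - fxy**2
    if d2 > 0:
        return "relative minimum" if d1 > 0 else "relative maximum"
    if d2 < 0:
        return "saddle point"
    return "test inconclusive"   # D2 = 0: higher-order terms decide

# f(x,y) = x^2 + y^2 at (0,0): H = [[2,0],[0,2]]
print(classify(2, 0, 2))    # relative minimum
# f(x,y) = x^2 - y^2 at (0,0): H = [[2,0],[0,-2]]
print(classify(2, 0, -2))   # saddle point
# f(x,y) = -x^2 - y^2 at (0,0): H = [[-2,0],[0,-2]]
print(classify(-2, 0, -2))  # relative maximum
```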

Second derivative test for a function of three variables:


Suppose that the second partial derivatives of f : R³ → R are continuous on a ball with center (a,b,c), where f_x(a,b,c) = 0, f_y(a,b,c) = 0 and f_z(a,b,c) = 0 (i.e., (a,b,c) is a critical point of f). Let

H = |f_xx  f_xy  f_xz|
    |f_yx  f_yy  f_yz|
    |f_zx  f_zy  f_zz|

And D₁ = f_xx,  D₂ = |f_xx  f_xy|,  D₃ = det H.
                     |f_yx  f_yy|

(a) If 𝐷1 > 0 , 𝐷2 > 0 & 𝐷3 > 0 then H is positive definite and f has a relative minimum at (a,b,c) .
(b) If 𝐷1 < 0 , 𝐷2 > 0 & 𝐷3 < 0 then H is negative definite and f has a relative maximum at (a,b,c).
(c) In any other case with D₃ ≠ 0, H is indefinite and f has a saddle point at (a,b,c).

(B) Constrained optimization


Lagrange method of multipliers

Let f(x, y, z) be a function of three variables whose arguments are connected by the relation φ(x, y, z) = 0. Form the auxiliary (Lagrangian) function

F = f(x, y, z) + λ φ(x, y, z) ---------------(1)

The necessary conditions for F to have stationary values are F_x = 0, F_y = 0, F_z = 0, i.e.

∂f/∂x + λ ∂φ/∂x = 0 ---------------------(2)

∂f/∂y + λ ∂φ/∂y = 0 ----------------------(3)

∂f/∂z + λ ∂φ/∂z = 0 -----------------------(4)

On solving (1), (2), (3) and (4) we can find the values of x, y, z and λ for which f(x, y, z) has a stationary value.
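Conditions (2)-(4) say that ∇f + λ∇φ = 0 for a single λ. A minimal numerical sketch, using the illustrative pair f = x + y + z, φ = xyz − 1 (stationary at (1, 1, 1) with λ = −1):

```python
def grad_f(x, y, z):
    return (1.0, 1.0, 1.0)              # gradient of x + y + z

def grad_phi(x, y, z):
    return (y * z, x * z, x * y)        # gradient of xyz - 1

p = (1.0, 1.0, 1.0)                     # candidate stationary point, xyz = 1
lam = -grad_f(*p)[0] / grad_phi(*p)[0]  # lambda from the first condition

# all three conditions f_i + lambda * phi_i = 0 must hold with the same lambda
residuals = [df + lam * dphi for df, dphi in zip(grad_f(*p), grad_phi(*p))]
print(lam, residuals)  # → -1.0 [0.0, 0.0, 0.0]
```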

Short Answer Questions

1) If u = 1/√(x² + y² + z²), x² + y² + z² ≠ 0, then show that ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = 0.

Solution: Given u = (x² + y² + z²)^(−1/2). Then

u_x = −(1/2)(x² + y² + z²)^(−3/2)(2x) = −x(x² + y² + z²)^(−3/2)

u_xx = (x² + y² + z²)^(−5/2)(2x² − y² − z²)

Similarly

u_yy = (x² + y² + z²)^(−5/2)(2y² − x² − z²)

u_zz = (x² + y² + z²)^(−5/2)(2z² − x² − y²)

Adding, we get u_xx + u_yy + u_zz = 0.

2) If u = tan⁻¹(x/y), then show that ∂²u/∂x∂y = ∂²u/∂y∂x.

Solution: Given u = tan⁻¹(x/y). Then

∂u/∂x = [1/(1 + (x/y)²)] · (1/y) = y/(x² + y²)

Similarly we obtain

∂u/∂y = −x/(x² + y²),   ∂²u/∂y∂x = (x² − y²)/(x² + y²)²,   ∂²u/∂x∂y = (x² − y²)/(x² + y²)²

Therefore ∂²u/∂x∂y = ∂²u/∂y∂x.

3) If z = f(x + ay) + φ(x − ay), prove that ∂²z/∂y² = a² ∂²z/∂x².

Solution: Given z = f(x + ay) + φ(x − ay). Then

z_x = f′(x + ay) + φ′(x − ay)
z_xx = f″(x + ay) + φ″(x − ay)
z_y = a f′(x + ay) − a φ′(x − ay)
z_yy = a²[f″(x + ay) + φ″(x − ay)]

Clearly we have ∂²z/∂y² = a² ∂²z/∂x².
4) If x = r cosθ, y = r sinθ, show that (1/r) ∂x/∂θ = r ∂θ/∂x.

Solution: Given x = r cosθ, y = r sinθ. Then ∂x/∂θ = −r sinθ.

Since θ = tan⁻¹(y/x),

∂θ/∂x = −y/(x² + y²) = −y/r² = −sinθ/r

Therefore r ∂θ/∂x = −sinθ = (1/r) ∂x/∂θ. Hence proved.

5) If u = x² − 2y, v = x + y + z, w = x + 2y + 3z, then find ∂(u, v, w)/∂(x, y, z).

Solution: We have u_x = 2x, u_y = −2, u_z = 0; v_x = 1, v_y = 1, v_z = 1; w_x = 1, w_y = 2, w_z = 3. Then

∂(u, v, w)/∂(x, y, z) = |2x  −2  0|
                        | 1   1  1| = 2x(3 − 2) + 2(3 − 1) + 0 = 2x + 4
                        | 1   2  3|

x y
6) Prove that u  , v  tan 1 x  tan 1 y are functionally dependent.
1  xy

x y
, we have ux  1  y 2 , u y  1  x 2
2 2
Solution: Given u 
1  xy 1  xy  1  xy 
1
From v  tan 1 x  tan 1 y , we have vx  1
, vy 
1 x 2
1 y2

65
1 y2 1  x2
ux u y
1  xy  1  xy 
2 2
We have J   0
vx u y 1 1
1  x2 1 y2

Therefore, u and vare functionally dependent.


𝑥+𝑦
From the relation 𝑣 = 𝑡𝑎𝑛−1 𝑥 + 𝑡𝑎𝑛 −1 𝑦 ≡ 𝑡𝑎𝑛−1 (1−𝑥𝑦) = 𝑡𝑎𝑛−1 𝑢

The functional relationship is therefore 𝑢 = 𝑡𝑎𝑛𝑣

7) If u = x + y + z, uv = y + z, uvw = z, then prove that ∂(x, y, z)/∂(u, v, w) = u²v.

Solution: Given u = x + y + z, uv = y + z, uvw = z. Then z = uvw; from uv = y + z, y = uv − uvw; and x = u − uv. We get

x_u = 1 − v,   x_v = −u,      x_w = 0
y_u = v − vw,  y_v = u − uw,  y_w = −uv
z_u = vw,      z_v = uw,      z_w = uv

Then ∂(x, y, z)/∂(u, v, w) = |1 − v    −u      0  |
                             |v − vw   u − uw  −uv| = u²v
                             |vw       uw      uv |

8). If x  uv , y
u
then show that JJ   1
v

x x y 1 y
Solution: Given that x  uv , y
u
then  v, u ,  ,
u
  2 Now
v u v u v v v

u v
xu yu 2u
J  1 u 
xv yv  2 v
v v

u 2  xy and v 2 
x u y u  x v 1 v

x
,  2
But we have also  , ,
y x 2u y 2u x 2vy y 2vy
y x
ux u y 2u 2u v .
J    Therefore JJ   1 .
vx uy 1 x 2u
 2
2vy 2vy

9) Find the stationary values by Lagrange's method of multipliers for f = x + y + z and φ = xyz − a³.

Solution: Given f(x, y, z) = x + y + z, φ(x, y, z) = xyz − a³.

Construct the Lagrangian function F(x, y, z) = f(x, y, z) + λ φ(x, y, z).

We set ∂F/∂x = ∂F/∂y = ∂F/∂z = 0:

∂F/∂x = ∂f/∂x + λ ∂φ/∂x = 0 ⇒ 1 + λyz = 0 ⇒ λ = −1/(yz)

Similarly we get λ = −1/(xz) and λ = −1/(xy).

By eliminating λ, we get the critical relation x = y = z.

By substituting this relation in φ(x, y, z) = 0, we get x = a, y = a, z = a. Therefore the critical point is (a, a, a).

Therefore the minimum value of f(x, y, z) = x + y + z at (a, a, a) is

f(a, a, a) = a + a + a ≡ 3a
10) If x = r cosθ, y = r sinθ, find the Jacobians J = ∂(x, y)/∂(r, θ) and J′ = ∂(r, θ)/∂(x, y), and hence show that JJ′ = 1.

Solution: Given x = r cosθ, y = r sinθ. Then x_r = cosθ, y_r = sinθ, x_θ = −r sinθ, y_θ = r cosθ.

J = |x_r  x_θ| = |cosθ  −r sinθ| = r cos²θ + r sin²θ = r
    |y_r  y_θ|   |sinθ   r cosθ|

Now with r = √(x² + y²) and θ = tan⁻¹(y/x):

∂r/∂x = x/√(x² + y²),  ∂r/∂y = y/√(x² + y²),  ∂θ/∂x = −y/(x² + y²),  ∂θ/∂y = x/(x² + y²)

J′ = |r_x  r_y| = x²/(x² + y²)^(3/2) + y²/(x² + y²)^(3/2) = 1/√(x² + y²) = 1/r
     |θ_x  θ_y|

Hence JJ′ = r · (1/r) = 1.

x y xy  (u , v )
11) If u  ,v then find are u and v are functionally related ?
x y ( x  y) 2
 ( x, y )

x y xy  (u, v) ux u y
Solution: Given that u  ,v then 
x y ( x  y)2  ( x, y ) vx u y

2y 2x y( x  y) x( x  y )
ux   , uy  , vx   , vy 
( x  y) 2
( x  y) 2
( x  y) 3
( x  y )3

2y 2x

( x  y) ( x  y)2
2

J  0 . Therefore u, v are functionally dependent or related.


y ( x  y ) x( x  y )

( x  y )3 ( x  y )3

12) Among the points (6, 0) and (5, 1), which of them is a saddle point for the function f(x, y) = x³ + 3xy² − 15x² − 15y² + 72x?

Sol: Points of optimum (maximum, minimum or saddle points) can be detected by analyzing the Hessian matrix for its definiteness.

In this case, H = |6x − 30   6y     |,  which at (5, 1) becomes |0  6|
                  |6y        6x − 30|                           |6  0|

Since det H = −36 < 0, H is indefinite and hence (5, 1) is a saddle point. (At (6, 0), H = diag(6, 6) is positive definite, so (6, 0) is a relative minimum.)

Long Answer Questions


1. If x^x y^y z^z = c, show that ∂²z/∂x∂y = −(x log ex)⁻¹ when x = y = z.

Solution: Given x^x y^y z^z = c.

Taking logarithms on both sides we get x log x + y log y + z log z = log c ----------(1)

Differentiating (1) partially w.r.t. x (z being a function of x and y):

(1 + log x) + (1 + log z) ∂z/∂x = 0  ⇒  ∂z/∂x = −(1 + log x)/(1 + log z) ----(2)

Similarly, differentiating w.r.t. y:

∂z/∂y = −(1 + log y)/(1 + log z) ---------------(3)

Now differentiate equation (3) w.r.t. x:

∂²z/∂x∂y = ∂/∂x[−(1 + log y)/(1 + log z)] = (1 + log y) · [1/(1 + log z)²] · (1/z) · ∂z/∂x

Putting in the value of ∂z/∂x from (2),

∂²z/∂x∂y = −(1 + log x)(1 + log y)/[z(1 + log z)³]

Now at x = y = z:

∂²z/∂x∂y = −(1 + log x)²/[x(1 + log x)³] = −1/[x(1 + log x)]

= −1/[x(log e + log x)] = −1/[x log(ex)] = −(x log ex)⁻¹

u u u
2. If 𝒖 = 𝒇(𝒆𝒚−𝒛 , 𝒆𝒛−𝒙 , 𝒆𝒙−𝒚 )then prove that    0.
x y z

Sol ution: Given that

u  f (e y  z , e z  x , e x  y ) ------------(1)

Let X  e y  z , Y  e z  x , Z  e x  y

By Chain Rule
u u X u Y u Z
  
x X x Y x Z x
u u u x  y
= (0)  ( e z  x )  (e )
X Y Z
u u x  y
= ( e z  x )  (e ) -----------(2)
Y Z

69
u u y  z u
 (e )  (e x  y ) ------------(3)
y X Z

u u u z  x
 ( e y  z )  (e ) ---------------(4)
z X Y
Adding equations (2),(3) and (4)we get
u u u u z  x u x  y u y  z u x  y u y  z u z  x
  = (e )  (e )  (e )  (e )  (e )  (e )
x y z Y Z X Z X Y

=0
u u u
Therefore   0 .
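The identity can also be confirmed numerically for any choice of f; the f below is an arbitrary illustrative choice:

```python
import math

# any smooth f of the three exponential combinations works
f = lambda X, Y, Z: X * Y + Z**2
u = lambda x, y, z: f(math.exp(y - z), math.exp(z - x), math.exp(x - y))

def partial(i, p, h=1e-6):
    """Central-difference partial derivative of u in the i-th variable."""
    a = list(p); a[i] += h
    b = list(p); b[i] -= h
    return (u(*a) - u(*b)) / (2 * h)

p = (0.3, -0.2, 0.7)
total = sum(partial(i, p) for i in range(3))  # u_x + u_y + u_z
print(abs(total) < 1e-6)  # → True
```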
x y z

u u 2  u
2
 2u 2  u
2
u  2u
3. If 𝒙 = 𝒓𝒄𝒐𝒔𝜽, 𝒚 = 𝒓𝒔𝒊𝒏𝜽 prove that x  y  y  2 xy x r  2 .
x y x 2 xy y 2 r 

Solution:Given that x  r cos  , y  r sin 

u u x u y
 
r x r y r

u u
= cos   sin 
x y

u u x u y u u
  =  r sin   r cos 
 x  y  x y

  
 r sin   r cos 
Let  x y

 2u  u
 [ ]
 2
 
  u u
= (r sin   r cos  ) (r sin   r cos  )
x y x y

 2u  2u  2u
= r sin  2  2r sin  cos 
2 2 2
 r cos  2 -------(2)
2 2

x xy y
Now

u  2u u u  2u  2u  2u
r  2  r[cos   sin  ]  r 2 sin 2  2  2r 2 sin  cos   r 2 cos 2  2
r  x y x xy y

70
u u 2 2  2u  2u  2u
= r cos   r sin   r sin  2  2r sin  cos 
2
 r cos  2
2 2

x y x xy y

u u 2  u
2
 2u 2  u
2
=x  y  y  2 xy x
x y x 2 xy y 2
Hence proved.

4. A function f(x, y) is written in terms of the new variables u = eˣ cos y, v = eˣ sin y. Show that

∂f/∂x = u ∂f/∂u + v ∂f/∂v,   ∂f/∂y = −v ∂f/∂u + u ∂f/∂v

Solution: Given u = eˣ cos y, v = eˣ sin y, so u_x = eˣ cos y = u, v_x = eˣ sin y = v, u_y = −eˣ sin y = −v, v_y = eˣ cos y = u.

We have

∂f/∂x = (∂f/∂u) u_x + (∂f/∂v) v_x = u ∂f/∂u + v ∂f/∂v ----------(1)

Similarly

∂f/∂y = (∂f/∂u) u_y + (∂f/∂v) v_y = −v ∂f/∂u + u ∂f/∂v ------------(2)

5. Show that the dependent variables in the following transformation are functionally dependent, and also establish the relation:
u = x e^y sin z, v = x e^y cos z, w = x² e^(2y)

Solution: Given u = x e^y sin z, v = x e^y cos z, w = x² e^(2y).

J = ∂(u, v, w)/∂(x, y, z) = |u_x  v_x  w_x|   |e^y sin z    e^y cos z     2x e^(2y) |
                            |u_y  v_y  w_y| = |x e^y sin z  x e^y cos z   2x² e^(2y)|
                            |u_z  v_z  w_z|   |x e^y cos z  −x e^y sin z  0         |

Taking out the common factors e^y, e^y, e^(2y) from the three columns,

J = e^(4y) |sin z    cos z     2x |
           |x sin z  x cos z   2x²| = e^(4y)(0) = 0
           |x cos z  −x sin z  0  |

since the second row of the reduced determinant is x times the first.

Hence u, v and w are functionally related. The relation between them can be found as follows:

u² + v² = x² e^(2y)(sin²z + cos²z) = x² e^(2y) = w

6. Divide 24 into 3 parts such that the continued product of the first, the square of the second and the cube of the third may be maximum.

Solution: Given f = xy²z³ and φ = x + y + z − 24.

We know that F = f(x, y, z) + λ φ(x, y, z):

F = xy²z³ + λ(x + y + z − 24) ------------------(1)

The necessary conditions to get stationary points:

F_x = 0 ⇒ y²z³ + λ = 0 ⇒ λ = −y²z³ ------------(2)

F_y = 0 ⇒ 2xyz³ + λ = 0 ⇒ λ = −2xyz³ -----------(3)

F_z = 0 ⇒ 3xy²z² + λ = 0 ⇒ λ = −3xy²z² ----------(4)

On solving (2), (3) and (4): y = 2x and z = 3x, so x + 2x + 3x = 24 gives x = 4, y = 8 and z = 12.

The maximum = 4(8)²(12)³ = 442368.
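Because the optimum happens to fall on integers, a brute-force scan over the integer partitions of 24 cross-checks the stationary point:

```python
# exhaustive search over x + y + z = 24 with positive integer parts
best = max((x * y**2 * z**3, x, y, z)
           for x in range(1, 23)
           for y in range(1, 24 - x)
           for z in (24 - x - y,))
print(best)  # → (442368, 4, 8, 12)
```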

7. The temperature T at any point (x, y, z) in space is T = 400xyz². Find the highest temperature on the surface of the unit sphere x² + y² + z² = 1.

Solution: Given f ≡ T = 400xyz² and φ = x² + y² + z² − 1.

We know that F = f(x, y, z) + λ φ(x, y, z):

F = 400xyz² + λ(x² + y² + z² − 1) -----------------(1)

Now

F_x = 0 ⇒ 400yz² + 2λx = 0 ⇒ λ = −200yz²/x -------------(2)

F_y = 0 ⇒ 400xz² + 2λy = 0 ⇒ λ = −200xz²/y --------------(3)

F_z = 0 ⇒ 800xyz + 2λz = 0 ⇒ λ = −400xy ----------------(4)

(2) = (3) gives x = y, and (3) = (4) gives z² = 2x². With x² + y² + z² = 1:

x² + x² + 2x² = 1 ⇒ 4x² = 1 ⇒ x = 1/2

so y = 1/2 and z = 1/√2. Required maximum = 400(1/2)(1/2)(1/√2)² ≡ 50.

8. Find the absolute maximum and minimum values of f(x, y) = 2 + 2x + 2y − x² − y² on the triangular plate in the first quadrant bounded by the lines x = 0, y = 0 and y = 9 − x.

Solution: Given f(x, y) = 2 + 2x + 2y − x² − y², so f_x = 2 − 2x, f_y = 2 − 2y, f_xx = −2, f_xy = 0 and f_yy = −2.

For the interior critical point: f_x = 0 ⇒ x = 1 and f_y = 0 ⇒ y = 1.

At A(1, 1): D₂ = det H = 4 > 0 and D₁ = −2 < 0, so f attains a relative maximum at (1, 1), with f(1, 1) = 4.

On the boundary: along x = 0, f = 2 + 2y − y² takes its least value −61 at y = 9; along y = 0, similarly f(9, 0) = −61; along y = 9 − x, f = −61 + 18x − 2x², which stays below −20. Comparing all candidates, the absolute maximum is 4 at (1, 1) and the absolute minimum is −61 at (9, 0) and (0, 9).

9. Find the relative maximum and minimum values of the function
𝒇(𝒙, 𝒚) = 𝟑𝒙𝟐 𝒚 + 𝒚𝟑 − 𝟑𝒙𝟐 − 𝟑𝒚𝟐 + 𝟏
Solution:
We have 𝑓𝑥 = 6𝑥𝑦 − 6𝑥 = 0 & 𝑓𝑦 = 3𝑥 2 + 3𝑦 2 − 6𝑦 = 0

This yields four critical points (0,0) ,(0,2), (1,1) ,(-1,1)


We compute the matrix of second order partial derivatives

H = |6y − 6   6x    |
    |6x       6y − 6|

(a) At (0, 0): D₁(0,0) = −6 < 0 and D₂(0,0) = |−6 0; 0 −6| = 36 > 0, so f has a relative maximum at (0, 0). The maximum value is f(0, 0) = 1.

(b) At (0, 2): D₁(0,2) = 6 > 0 and D₂(0,2) = |6 0; 0 6| = 36 > 0, so f has a relative minimum at (0, 2). The minimum value is f(0, 2) = −3.

(c) At (±1, 1): D₂(±1,1) = |0 ±6; ±6 0| = −36 < 0, so f has saddle points at (±1, 1).

10. Examine f(x, y) = x³ + y³ − 3axy (a > 0) for maximum and minimum values.

Solution: Given f(x, y) = x³ + y³ − 3axy ---------------(1)

Now f_x = 3x² − 3ay, f_y = 3y² − 3ax, f_xx = 6x, f_xy = −3a and f_yy = 6y.

For maxima and minima:

f_x = 0 ⇒ 3x² − 3ay = 0 ⇒ y = x²/a ---------(2)

f_y = 0 ⇒ 3y² − 3ax = 0 ⇒ y² = ax -----------(3)

Solving (2) and (3) we get x = 0, a and correspondingly y = 0, a.

             (0,0)    (a,a)
r = f_xx      0        6a
s = f_xy     −3a      −3a
t = f_yy      0        6a
rt − s²      −9a²      27a²

At (0, 0), rt − s² < 0: there is no extreme value; it is a saddle point.

At (a, a), D₁ = 6a > 0 and D₂ = 27a² > 0, so H is positive definite and f attains a minimum value.

The minimum value of the function is f(a, a) = a³ + a³ − 3a³ = −a³.

11. In a plane triangle ABC, find the maximum value of cos A cos B cos C.

Solution: Since C = π − (A + B), cos C = −cos(A + B), so

f(A, B) = cos A cos B cos C = −cos A cos B cos(A + B)

f_A = cos B sin(2A + B)

f_B = cos A sin(A + 2B)

r = ∂²f/∂A² = 2 cos B cos(2A + B)

s = ∂²f/∂A∂B = cos(2A + 2B)

t = ∂²f/∂B² = 2 cos A cos(A + 2B)

For maxima and minima, ∂f/∂A = 0 and ∂f/∂B = 0:

cos B sin(2A + B) = 0 ---------------(2)

cos A sin(A + 2B) = 0 ---------------(3)

Solving equations (2) and (3) (with 0 < A, B and A + B < π) gives A = B = π/3.

At (π/3, π/3): r = −1, s = −1/2, t = −1, so D₁ = −1 < 0 and D₂ = rt − s² = 1 − 1/4 = 3/4 > 0.

Hence f attains a maximum at (π/3, π/3), i.e. for the equilateral triangle A = B = C = π/3.

Required maximum value = cos³(π/3) = 1/8.

12. Find and classify the critical points of the function 𝒇(𝒙, 𝒚, 𝒛) = 𝒙𝟐 + 𝒚𝟐 + 𝟕𝒛𝟐 − 𝒙𝒚 − 𝟑𝒚𝒛
Solution:
We have 𝑓𝑥 = 2𝑥 − 𝑦 = 0, 𝑓𝑦 = 2𝑦 − 𝑥 − 3𝑧 = 0 & 𝑓𝑧 = 14𝑧 − 3𝑦 = 0

This yields exactly one critical point (0,0,0)


At (0, 0, 0):

H = | 2  −1   0|
    |−1   2  −3|
    | 0  −3  14|

Then D₁ = 2 > 0,  D₂ = |2 −1; −1 2| = 3 > 0  and  D₃ = det H = 24 > 0,

Hence H is a positive definite matrix and f has a relative minimum at (0,0,0)

The minimum value of the function is 𝑓(0,0,0) = 0

13. Find the dimensions of the rectangular box of maximum capacity whose surface area S is given, when the box is closed.

Solution: Let x, y, z be the length, breadth and height of the rectangular box respectively.

V = xyz,  so V_x = yz, V_y = xz, V_z = xy
Surface area S = 2(xy + yz + zx)

By Lagrange's method:

V_x + λ S_x = 0 ⇒ yz + 2λ(y + z) = 0 ------------(1)

V_y + λ S_y = 0 ⇒ xz + 2λ(x + z) = 0 ------------(2)

V_z + λ S_z = 0 ⇒ xy + 2λ(x + y) = 0 -------------(3)

Solving equations (1), (2) and (3) gives x = y = z: length = breadth = height, i.e. the box is a cube.

Since S = 6x², the side is x = √(S/6) and the maximum volume is x³ = (S/6)^(3/2).


14. Use the method of Lagrange's multipliers to find the extreme values of f(x, y, z) = 2x + 3y + z subject to x² + y² = 5 and x + z = 1.

Solution: Given f(x, y, z) = 2x + 3y + z, φ(x, y, z) = x² + y² − 5, ψ(x, y, z) = x + z − 1.

By Lagrange's method:

f_x + λφ_x + μψ_x = 0 ⇒ 2 + 2λx + μ = 0 -----------(1)

f_y + λφ_y + μψ_y = 0 ⇒ 3 + 2λy = 0 ------------(2)

f_z + λφ_z + μψ_z = 0 ⇒ 1 + λ(0) + μ = 0 -------------(3)

From (3), μ = −1; then (1) gives 2λx = −1 and (2) gives 2λy = −3, so y = 3x.

Substituting in x² + y² = 5: 10x² = 5, so x = ±1/√2, y = ±3/√2 and z = 1 ∓ 1/√2.

The extreme points are (1/√2, 3/√2, 1 − 1/√2) and (−1/√2, −3/√2, 1 + 1/√2).

The extreme values are 1 + 5√2 and 1 − 5√2.

15. Find the area of the greatest rectangle that can be inscribed in the ellipse x²/a² + y²/b² = 1.

Solution: Let ABCD be the rectangle, with vertex A(x, y) on the ellipse, so AB = 2x and BC = 2y; Area A = (2x)(2y) = 4xy.

With φ = x²/a² + y²/b² − 1: ∂A/∂x = 4y, ∂A/∂y = 4x, ∂φ/∂x = 2x/a², ∂φ/∂y = 2y/b².

By Lagrange's method:

∂A/∂x + λ ∂φ/∂x = 0 ⇒ 4y + λ(2x/a²) = 0 ⇒ λ = −2ya²/x -----------(1)

∂A/∂y + λ ∂φ/∂y = 0 ⇒ 4x + λ(2y/b²) = 0 ⇒ λ = −2xb²/y -------------(2)

Solving equations (1) and (2) together with the constraint, we get

x = a/√2 and y = b/√2

Required area = 4 · (a/√2) · (b/√2) = 2ab.

Hence the area of the greatest rectangle inscribed in the ellipse is 2ab.

16. A rectangular box open at the top is to be designed to have a fixed capacity 4000 cft.
Determine its dimensions such that its surface area is a minimum using Lagrange’s Multipliers
Method.

Solution: Choose the dimensions of the box as x, y and z, so that its volume and surface area are respectively xyz and xy + 2yz + 2zx.

The problem is to minimize xy + 2yz + 2zx subject to xyz = 4000.

The Lagrangian function is L(x, y, z, λ) = (xy + 2yz + 2zx) + λ(xyz − 4000).

From L_x = 0, L_y = 0, L_z = 0 we get the critical relation x = y = 2z.

Substituting in the equation xyz = 4000, we get the critical point (20, 20, 10).

This gives the minimum value 1200 sq. ft for the surface area.

Exercise
1. If x² + y² + z² = u², prove that u_xx + u_yy + u_zz = 2/u.
2. If x = e^r cosθ, y = e^r sinθ, then show that u_xx + u_yy = e^(−2r)[u_rr + u_θθ].
3. If u = sin⁻¹(x/y) + tan⁻¹(y/x), then prove that x ∂u/∂x + y ∂u/∂y = 0.
4. If x = √(vw), y = √(uw), z = √(uv) and u = r sinθ cosφ, v = r sinθ sinφ, w = r cosθ, find ∂(x, y, z)/∂(r, θ, φ). [Ans: (r² sinθ)/4]
5. Find J, J′ for x = e^v sec u, y = e^v tan u and hence show that JJ′ = 1. [Ans: J = −x e^v, J′ = −1/(x e^v)]
6. Show that the functions u = x + y + z, v = x² + y² + z² − 2xy − 2yz − 2zx, w = x³ + y³ + z³ − 3xyz are functionally related and hence find the relation between them. [Ans: 4w = u³ + 3uv]
7. Find the maximum and minimum distances from the origin to the curve 5x² + 6xy + 5y² − 8 = 0. [Ans: 2, 1]
8. Locate the stationary points of x⁴ + y⁴ + 4xy − 2x² − 2y² and examine their nature.
9. Find the extrema of the following functions:
(a) 3y² − 2y³ − 3x² + 6xy
(b) xy + 1/x + 1/y
(c) x⁴ + y⁴ + z⁴ − 4xyz
10. Find the absolute maximum and minimum values of the following functions in the closed region R:
(a) f(x, y) = x³ + y³ − xy, R: the triangle bounded by x = 1, y = 0 and y = 2x. (Max val. = 7, Min val. = −1/27)
(b) f(x, y) = cos x + cos y + cos(x + y), R: 0 ≤ x ≤ π, 0 ≤ y ≤ π. (Max = 3, Min val. = −3/2)
11. A rectangular box open at the top has constant surface area 108 sq. ft. Find its dimensions such that its volume is maximum. [Ans: 108 cft at (6, 6, 3)]
12. The sum of three positive integers is 12. Find the maximum of the product of the first, the square of the second and the cube of the third. [Max = 6912 at (2, 4, 6)]
13. Find the volume of the largest rectangular parallelepiped that can be inscribed in the ellipsoid of revolution 4x² + 4y² + 9z² = 36. [Max val. = 16√3]
14. The temperature at a point (x, y) on a metal plate is T(x, y) = 4x² − 4xy + y². An ant on the plate walks around the circle of radius 5 centered at the origin. What are the highest and lowest temperatures encountered by the ant?
15. Find the minimum value of the function f(x, y, z) = x² + y² + z² subject to the constraints x + 2y + 3z = 6 and x + 3y + 9z = 9.

UNIT – V
FUNCTION APPROXIMATION TOOLS IN ENGINEERING
Definitions:
Continuity at a point: A function f(x) is said to be continuous at x = a if lim_(x→a+) f(x) = lim_(x→a−) f(x) = f(a).

Continuity in the interval: A function f(x) is said to be continuous in the interval [a, b] if f(x) is continuous at every point c ∈ (a, b), i.e. lim_(x→c) f(x) = f(c), and lim_(x→a+) f(x) = f(a) and lim_(x→b−) f(x) = f(b).

Geometrically, if f(x) is continuous in [a,b] ,the graph of y = f(x) is a continuous curve for the points x in
[a,b].
Derivability at a point: A function f(x) is derivable at x = a if lim_(x→a+) [f(x) − f(a)]/(x − a) = lim_(x→a−) [f(x) − f(a)]/(x − a) exists, and the common value is denoted by f′(a).

Derivability in the interval: A function f(x) is said to be derivable in the interval [a, b] if f(x) is derivable at every point c ∈ (a, b), i.e. lim_(x→c) [f(x) − f(c)]/(x − c) exists, and the one-sided limits lim_(x→a+) [f(x) − f(a)]/(x − a) and lim_(x→b−) [f(x) − f(b)]/(x − b) exist.
Geometrically, if f(x) is derivable in [a,b] then there exist a unique tangent to the curve at every point in
the interval.
Note: 1. If f′(x) > 0 then f(x) is an increasing function as x increases.
2. If f′(x) < 0 then f(x) is a decreasing function as x increases.
3. eˣ, sin x, cos x are continuous and derivable everywhere.
4. log x is continuous and derivable in (0, ∞).
5. Every polynomial function is continuous and derivable everywhere.
6. If f(x) and g(x) are continuous functions then f(x) + g(x), f(x) − g(x), f(x)·g(x) are also continuous, and f(x)/g(x) is continuous wherever g(x) ≠ 0.

Generalized mean value theorems:


Taylor's Theorem:
If f : [a, x] → R is such that (i) f^(n−1) is continuous in [a, x] and (ii) f^(n−1) is derivable in (a, x), then there exists a point c ∈ (a, x) such that

f(x) = f(a) + [(x − a)/1!] f′(a) + [(x − a)²/2!] f″(a) + ....... + [(x − a)^(n−1)/(n − 1)!] f^(n−1)(a) + R_n ....(1)

where R_n = [(x − a)ⁿ/n!] f^(n)(c) is called Lagrange's form of remainder after n terms. If R_n → 0 as n → ∞, then (1) is called the Taylor's series expansion of f(x) about the point x = a.

Another form of Taylor's Theorem:
If f : [a, a+h] → R is such that (i) f^(n−1) is continuous in [a, a+h] and (ii) f^(n−1) is derivable in (a, a+h), then there exists a real number θ ∈ (0, 1) such that

f(a + h) = f(a) + (h/1!) f′(a) + (h²/2!) f″(a) + ....... + [h^(n−1)/(n − 1)!] f^(n−1)(a) + R_n ....(1)

where R_n = (hⁿ/n!) f^(n)(a + θh) is called Lagrange's form of remainder after n terms.

Maclaurin's Series: A Taylor's series expansion of f(x) about the point x = 0 is called the Maclaurin's series of f(x), i.e.

f(x) = f(0) + (x/1!) f′(0) + (x²/2!) f″(0) + ....... + [x^(n−1)/(n − 1)!] f^(n−1)(0) + ......

Short Answer Questions :

1) Obtain Maclaurin's series expansion of f(x) = log(1 + x).

Sol: Given f(x) = log(1 + x) ⇒ f(0) = log 1 = 0

f′(x) = 1/(1 + x) ⇒ f′(0) = 1
f″(x) = −1/(1 + x)² ⇒ f″(0) = −1
f‴(x) = 2/(1 + x)³ ⇒ f‴(0) = 2
f⁗(x) = −6/(1 + x)⁴ ⇒ f⁗(0) = −6, etc.

∴ The Maclaurin's series expansion of f(x) is given by

f(x) = f(0) + (x/1!) f′(0) + (x²/2!) f″(0) + (x³/3!) f‴(0) + (x⁴/4!) f⁗(0) + ...........

⇒ log(1 + x) = x − x²/2 + x³/3 − x⁴/4 + ........
2) Write the Taylor's series for f(x) = (1 − x)^(5/2) with Lagrange's form of remainder after 3 terms in the interval [0, 1].

Sol: Given f(x) = (1 − x)^(5/2).

It is clear that f(x), f′(x), f″(x) are continuous and f‴(x) exists in (0, 1). Thus f(x) satisfies the conditions of Taylor's theorem.

∴ The Taylor's series for f(x) in [0, x] with remainder after 3 terms is

f(x) = f(0) + (x/1!) f′(0) + (x²/2!) f″(0) + (x³/3!) f‴(c),  c ∈ (0, x) ...... (1)

Now f(x) = (1 − x)^(5/2) ⇒ f(0) = 1
f′(x) = −(5/2)(1 − x)^(3/2) ⇒ f′(0) = −5/2
f″(x) = (15/4)(1 − x)^(1/2) ⇒ f″(0) = 15/4
f‴(x) = −(15/8)(1 − x)^(−1/2) ⇒ f‴(c) = −(15/8)(1 − c)^(−1/2)

∴ From (1): f(x) = 1 − 5x/2 + 15x²/8 − (15/48) x³ (1 − c)^(−1/2)

Long Answer questions:


1. Verify Taylor's theorem for f(x) = (1 − x)^(5/2) with Lagrange's form of remainder after 2 terms in the interval [0, 1].

Sol: Given f(x) = (1 − x)^(5/2).

It is clear that f(x), f′(x) are continuous and f″(x) exists in (0, 1). Thus f(x) satisfies the conditions of Taylor's theorem.

∴ Taylor's theorem with remainder after 2 terms on [0, x] gives

f(x) = f(0) + (x/1!) f′(0) + (x²/2!) f″(c),  c ∈ (0, x) ...... (1)

Now f(x) = (1 − x)^(5/2) ⇒ f(0) = 1
f′(x) = −(5/2)(1 − x)^(3/2) ⇒ f′(0) = −5/2
f″(x) = (15/4)(1 − x)^(1/2) ⇒ f″(c) = (15/4)(1 − c)^(1/2)

Substituting these in (1) with x = 1:

0 = 1 − 5/2 + (1/2!) · (15/4)(1 − c)^(1/2)

⇒ (1 − c)^(1/2) = 4/5
⇒ c = 9/25 ∈ (0, 1). ∴ Taylor's theorem is verified.
2. Obtain Maclaurin's series expansion of f(x) = sin(m sin⁻¹x), where m is a constant.

Sol: The Maclaurin's series expansion of f(x) is given by

f(x) = f(0) + (x/1!) f′(0) + (x²/2!) f″(0) + (x³/3!) f‴(0) + (x⁴/4!) f⁗(0) + ...........(1)

Now y = f(x) = sin(m sin⁻¹x)

⇒ y₁ = f′(x) = cos(m sin⁻¹x) · m/√(1 − x²) .................... (2)

⇒ √(1 − x²) y₁ = m cos(m sin⁻¹x)
⇒ (1 − x²) y₁² = m² cos²(m sin⁻¹x)
⇒ (1 − x²) y₁² = m²[1 − sin²(m sin⁻¹x)]
⇒ (1 − x²) y₁² = m²[1 − y²]

Differentiating w.r.t. x, we get

2(1 − x²) y₁y₂ − 2x y₁² = −2m² y y₁
⇒ (1 − x²) y₂ − x y₁ + m² y = 0,  y₁ ≠ 0 ............ (3)

Differentiating (3) w.r.t. x 'n' times using Leibnitz's rule, we get

(1 − x²) y_(n+2) + n(−2x) y_(n+1) + [n(n − 1)/2!](−2) y_n − (x y_(n+1) + n y_n) + m² y_n = 0

⇒ (1 − x²) y_(n+2) − (2n + 1)x y_(n+1) + (m² − n²) y_n = 0 ...............(4)

Put x = 0 in (4):

∴ y_(n+2)(0) + (m² − n²) y_n(0) = 0,  n = 0, 1, 2, 3, ..............(5)

Now f(0) = y(0) = sin(m sin⁻¹0) = 0
f′(0) = y₁(0) = m cos(m sin⁻¹0) = m
f″(0) = y₂(0) = −m² y(0) = 0
f‴(0) = y₃(0) = (1² − m²) y₁(0) = m(1² − m²)
f⁗(0) = y₄(0) = (2² − m²) y₂(0) = 0
f⁽⁵⁾(0) = y₅(0) = (3² − m²) y₃(0) = m(1² − m²)(3² − m²), etc.

Substituting these in (1):

∴ f(x) = mx + [m(1² − m²)/3!] x³ + [m(1² − m²)(3² − m²)/5!] x⁵ + .................

3. Obtain the 4th degree Taylor’s polynomial approximation to f(x) = 𝒆𝟐𝒙 about x = 0.
Find the maximum error when 0 ≤ 𝒙 ≤ 0.5

Sol: Taylor's series of f(x) = e^(2x) about x = 0 up to degree 4 is given by

f(x) = f(0) + (x/1!) f′(0) + (x²/2!) f″(0) + (x³/3!) f‴(0) + (x⁴/4!) f⁗(0) ............... (1)

Now f(x) = e^(2x) ⇒ f(0) = e⁰ = 1

f′(x) = 2e^(2x) ⇒ f′(0) = 2
f″(x) = 4e^(2x) ⇒ f″(0) = 4
f‴(x) = 8e^(2x) ⇒ f‴(0) = 8
f⁗(x) = 16e^(2x) ⇒ f⁗(0) = 16

∴ From (1):

e^(2x) ≈ 1 + 2x + (4/2!)x² + (8/3!)x³ + (16/4!)x⁴ = 1 + 2x + 2x² + 4x³/3 + 2x⁴/3

The error term is given by

R₅(x) = (x⁵/5!) f⁽⁵⁾(c) = (32/5!) x⁵ e^(2c),  0 < c < x

⇒ |R₅(x)| ≤ (32/120) [max_(0≤x≤0.5) x⁵] [max_(0≤c≤0.5) e^(2c)] = (32/120)(1/32)(e) = e/120
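The bound e/120 ≈ 0.0227 can be compared with the actual error of the degree-4 polynomial on [0, 0.5]:

```python
import math

p4 = lambda x: 1 + 2*x + 2*x**2 + 4*x**3/3 + 2*x**4/3  # degree-4 Taylor polynomial

# scan [0, 0.5]; the true error should stay below the bound e/120
bound = math.e / 120
worst = max(abs(math.exp(2*x) - p4(x)) for x in [i / 1000 for i in range(501)])
print(worst < bound, round(worst, 5), round(bound, 5))  # → True 0.00995 0.02265
```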

4. Prove that if x > 0, x − x²/2 < log(1 + x) < x − x²/[2(1 + x)].

Sol: Let f(x) = x − x²/2 − log(1 + x)

⇒ f′(x) = 1 − x − 1/(1 + x) = −x²/(1 + x)

Clearly f′(x) < 0 for x > 0, so f(x) is a decreasing function for x > 0. Since f(0) = 0, it follows that f(x) < 0 for x > 0:

⇒ x − x²/2 − log(1 + x) < 0

⇒ x − x²/2 < log(1 + x) ............. (1)

Again let g(x) = log(1 + x) − x + x²/[2(1 + x)]  ⟹  g′(x) = −x²/[2(1 + x)²]

Clearly g′(x) < 0 for x > 0, so g(x) is a decreasing function for x > 0. Since g(0) = 0, g(x) < 0 for x > 0:

⇒ log(1 + x) − x + x²/[2(1 + x)] < 0

⇒ log(1 + x) < x − x²/[2(1 + x)] ..................... (2)

By combining (1) & (2), we get

x − x²/2 < log(1 + x) < x − x²/[2(1 + x)]. Hence the proof.

5. Show that √x = 1 + (1/2)(x − 1) − (1/8)(x − 1)² + ............ for 0 < x < 2.

Sol: Let f(x) = √x.

We have to expand √x in powers of (x − 1), i.e. as a Taylor's series of f(x) about x = 1:

f(x) = f(1) + [(x − 1)/1!] f′(1) + [(x − 1)²/2!] f″(1) + [(x − 1)³/3!] f‴(1) + [(x − 1)⁴/4!] f⁗(1) + ............... (1)

Now f(x) = √x = x^(1/2) ⇒ f(1) = 1
f′(x) = (1/2) x^(−1/2) ⇒ f′(1) = 1/2
f″(x) = −(1/4) x^(−3/2) ⇒ f″(1) = −1/4
f‴(x) = (3/8) x^(−5/2) ⇒ f‴(1) = 3/8
f⁗(x) = −(15/16) x^(−7/2) ⇒ f⁗(1) = −15/16, etc.

Substituting these in (1):

∴ √x = 1 + (x − 1)(1/2) + [(x − 1)²/2!](−1/4) + [(x − 1)³/3!](3/8) + [(x − 1)⁴/4!](−15/16) + .................

= 1 + (x − 1)/2 − (x − 1)²/8 + (x − 1)³/16 − 5(x − 1)⁴/128 + .........

DIFFERENTIATION                                    INTEGRATION
1. d/dx(e^(ax)) = a·e^(ax)                         1. ∫ e^(ax) dx = e^(ax)/a + c
2. d/dx(xⁿ) = n·xⁿ⁻¹                               2. ∫ xⁿ dx = xⁿ⁺¹/(n + 1) + c  (n ≠ −1)
3. d/dx(aˣ) = aˣ log a                             3. ∫ aˣ dx = aˣ/log a + c
4. d/dx(log x) = 1/x                               4. ∫ (1/x) dx = log x + c
5. d/dx(sin x) = cos x                             5. ∫ sin x dx = −cos x + c
6. d/dx(cos x) = −sin x                            6. ∫ cos x dx = sin x + c
7. d/dx(tan x) = sec²x                             7. ∫ tan x dx = log|sec x| + c
8. d/dx(cot x) = −cosec²x                          8. ∫ cot x dx = log|sin x| + c
9. d/dx(sec x) = sec x tan x                       9. ∫ sec x dx = log|sec x + tan x| + c (or) = log|tan(π/4 + x/2)| + c
10. d/dx(cosec x) = −cosec x cot x                 10. ∫ cosec x dx = log|cosec x − cot x| + c (or) = log|tan(x/2)| + c
11. d/dx(sinh x) = cosh x                          11. ∫ sinh x dx = cosh x + c
12. d/dx(cosh x) = sinh x                          12. ∫ cosh x dx = sinh x + c
13. d/dx(tanh x) = sech²x                          13. ∫ tanh x dx = log|cosh x| + c
14. d/dx(coth x) = −cosech²x                       14. ∫ coth x dx = log|sinh x| + c
15. d/dx(sech x) = −sech x tanh x                  15. ∫ sech x dx = tan⁻¹(sinh x) + c
16. d/dx(cosech x) = −cosech x coth x              16. ∫ cosech x dx = ln|tanh(x/2)| + c
17. d/dx(sin⁻¹x) = 1/√(1 − x²)                     17. ∫ dx/√(a² − x²) = sin⁻¹(x/a) + c (or) −cos⁻¹(x/a) + c
18. d/dx(cos⁻¹x) = −1/√(1 − x²)                    18. ∫ dx/(a² + x²) = (1/a) tan⁻¹(x/a) + c (or) −(1/a) cot⁻¹(x/a) + c
19. d/dx(cot⁻¹x) = −1/(1 + x²)                     19. ∫ dx/√(x² − a²) = cosh⁻¹(x/a) + c (or) log(x + √(x² − a²)) + c
20. d/dx(tan⁻¹x) = 1/(1 + x²)                      20. ∫ dx/√(x² + a²) = sinh⁻¹(x/a) + c (or) log(x + √(x² + a²)) + c
21. d/dx(sec⁻¹x) = 1/(|x|√(x² − 1))                21. ∫ dx/(x² − a²) = (1/2a) log|(x − a)/(x + a)| + c (or) −(1/a) coth⁻¹(x/a) + c
22. d/dx(cosec⁻¹x) = −1/(|x|√(x² − 1))             22. ∫ dx/(a² − x²) = (1/2a) log|(a + x)/(a − x)| + c (or) (1/a) tanh⁻¹(x/a) + c
23. d/dx(uv) = uv′ + vu′                           23. ∫ uv dx = u∫v dx − ∫(u′∫v dx) dx
24. d/dx(u/v) = (u′v − v′u)/v²

∫ sec²x dx = tan x + c                                           cos(x ± y) = cos x cos y ∓ sin x sin y
∫ sec x tan x dx = sec x + c                                     sin(x ± y) = sin x cos y ± cos x sin y
∫ cosec²x dx = −cot x + c                                        sin C + sin D = 2 sin((C + D)/2) cos((C − D)/2)
∫ cosec x cot x dx = −cosec x + c                                sin C − sin D = 2 sin((C − D)/2) cos((C + D)/2)
∫ √(x² + a²) dx = (x/2)√(x² + a²) + (a²/2) sinh⁻¹(x/a) + c       cos C + cos D = 2 cos((C + D)/2) cos((C − D)/2)
∫ √(x² − a²) dx = (x/2)√(x² − a²) − (a²/2) cosh⁻¹(x/a) + c       cos C − cos D = −2 sin((C + D)/2) sin((C − D)/2)
∫ √(a² − x²) dx = (x/2)√(a² − x²) + (a²/2) sin⁻¹(x/a) + c        sin(A + B) + sin(A − B) = 2 sin A cos B
∫ e^(ax) sin bx dx = [e^(ax)/(a² + b²)][a sin bx − b cos bx] + c   sin(A + B) − sin(A − B) = 2 cos A sin B
∫ e^(ax) cos bx dx = [e^(ax)/(a² + b²)][a cos bx + b sin bx] + c   cos(A + B) + cos(A − B) = 2 cos A cos B
sin²x + cos²x = 1                                                cos(A + B) − cos(A − B) = −2 sin A sin B
sec²x − tan²x = 1                                                eˣ = 1 + x/1! + x²/2! + x³/3! + .......
cosec²x − cot²x = 1                                              sin x = x − x³/3! + x⁵/5! − x⁷/7! + ......
cos 2x = cos²x − sin²x = 2cos²x − 1 = 1 − 2sin²x                 cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + ......
sin 2x = 2 sin x cos x                                           log(1 + x) = x − x²/2 + x³/3 − x⁴/4 + .......
sin 3x = 3 sin x − 4 sin³x                                       log(1 − x) = −(x + x²/2 + x³/3 + .......)
cos 3x = 4cos³x − 3 cos x                                        (1 + x)⁻¹ = 1 − x + x² − x³ + x⁴ − ....
tan 2x = 2 tan x/(1 − tan²x)                                     (1 − x)⁻¹ = 1 + x + x² + x³ + x⁴ + ....
tan(x ± y) = (tan x ± tan y)/(1 ∓ tan x tan y)                   (1 + x)⁻² = 1 − 2x + 3x² − 4x³ + ....
                                                                 (1 − x)⁻² = 1 + 2x + 3x² + 4x³ + ....
89
90
=0 𝑖𝑓 𝑓 𝑖𝑠 𝑜𝑑𝑑
𝑎 𝑓 (𝑥 )𝑑𝑥 { 𝑎
∫−𝑎 = 2 ∫0 𝑓 (𝑥 )𝑑𝑥 𝑖𝑓 𝑓 𝑖𝑠 𝑒𝑣𝑒𝑛

𝑎
2𝑎 2 ∫ 𝑓 (𝑥 )𝑑𝑥 𝑖𝑓 𝑓 (2𝑎 − 𝑥 ) = 𝑓 (𝑥 )
𝑓 (𝑥 )𝑑𝑥 = { 0

0 0 𝑖𝑓 𝑓(2𝑎 − 𝑥 ) = −𝑓(𝑥 )

Cartesian to Cylindrical Co-ordinates:


𝑥 = 𝜌𝑐𝑜𝑠𝜃, 𝑦 = 𝜌𝑠𝑖𝑛𝜃, 𝑍 = 𝑧 𝑎𝑛𝑑 𝑑𝑥𝑑𝑦𝑑𝑧 = 𝜌𝑑𝑟𝑑𝜃𝑑𝑧
Cartesian to Spherical Co-ordinates:
𝑥 = 𝑟𝑠𝑖𝑛𝜃𝑐𝑜𝑠∅, 𝑦 = 𝑟𝑠𝑖𝑛𝜃𝑠𝑖𝑛∅, 𝑧 = 𝑟𝑐𝑜𝑠𝜃 and 𝑑𝑥𝑑𝑦𝑑𝑧 = 𝑟 2 𝑠𝑖𝑛𝜃𝑑𝑟𝑑𝜃𝑑∅

91
DIFFERENTIATION AND INTEGRATION (each entry lists the derivative, then the companion integral)

1. \frac{d}{dx}(e^{ax}) = a\,e^{ax};  \int e^{ax}\,dx = \frac{e^{ax}}{a} + c
2. \frac{d}{dx}(x^{n}) = n x^{n-1};  \int x^{n}\,dx = \frac{x^{n+1}}{n+1} + c
3. \frac{d}{dx}(a^{x}) = a^{x}\log a;  \int a^{x}\,dx = \frac{a^{x}}{\log a} + c
4. \frac{d}{dx}(\log x) = \frac{1}{x};  \int \frac{1}{x}\,dx = \log x + c
5. \frac{d}{dx}(\sin x) = \cos x;  \int \sin x\,dx = -\cos x + c
6. \frac{d}{dx}(\cos x) = -\sin x;  \int \cos x\,dx = \sin x + c
7. \frac{d}{dx}(\tan x) = \sec^{2}x;  \int \tan x\,dx = \log|\sec x| + c
8. \frac{d}{dx}(\cot x) = -\csc^{2}x;  \int \cot x\,dx = \log|\sin x| + c
9. \frac{d}{dx}(\sec x) = \sec x\tan x;  \int \sec x\,dx = \log|\sec x + \tan x| + c = \log\left|\tan\left(\frac{\pi}{4}+\frac{x}{2}\right)\right| + c
10. \frac{d}{dx}(\csc x) = -\csc x\cot x;  \int \csc x\,dx = \log|\csc x - \cot x| + c = \log\left|\tan\frac{x}{2}\right| + c
11. \frac{d}{dx}(\sinh x) = \cosh x;  \int \sinh x\,dx = \cosh x + c
12. \frac{d}{dx}(\cosh x) = \sinh x;  \int \cosh x\,dx = \sinh x + c
13. \frac{d}{dx}(\tanh x) = \operatorname{sech}^{2}x;  \int \tanh x\,dx = \log|\cosh x| + c
14. \frac{d}{dx}(\coth x) = -\operatorname{cosech}^{2}x;  \int \coth x\,dx = \log|\sinh x| + c
15. \frac{d}{dx}(\operatorname{sech} x) = -\operatorname{sech} x\tanh x;  \int \operatorname{sech} x\,dx = \tan^{-1}(\sinh x) + c
16. \frac{d}{dx}(\operatorname{cosech} x) = -\operatorname{cosech} x\coth x;  \int \operatorname{cosech} x\,dx = \log\left|\tanh\frac{x}{2}\right| + c
17. \frac{d}{dx}(\sin^{-1}x) = \frac{1}{\sqrt{1-x^{2}}};  \int \frac{dx}{\sqrt{a^{2}-x^{2}}} = \sin^{-1}\frac{x}{a} + c \ \text{or}\ -\cos^{-1}\frac{x}{a} + c
18. \frac{d}{dx}(\cos^{-1}x) = \frac{-1}{\sqrt{1-x^{2}}};  \int \frac{dx}{a^{2}+x^{2}} = \frac{1}{a}\tan^{-1}\frac{x}{a} + c \ \text{or}\ -\frac{1}{a}\cot^{-1}\frac{x}{a} + c
19. \frac{d}{dx}(\cot^{-1}x) = \frac{-1}{1+x^{2}};  \int \frac{dx}{\sqrt{x^{2}-a^{2}}} = \cosh^{-1}\frac{x}{a} + c \ \text{or}\ \log\left(x+\sqrt{x^{2}-a^{2}}\right) + c
20. \frac{d}{dx}(\tan^{-1}x) = \frac{1}{1+x^{2}};  \int \frac{dx}{\sqrt{x^{2}+a^{2}}} = \sinh^{-1}\frac{x}{a} + c \ \text{or}\ \log\left(x+\sqrt{x^{2}+a^{2}}\right) + c
21. \frac{d}{dx}(\sec^{-1}x) = \frac{1}{|x|\sqrt{x^{2}-1}};  \int \frac{dx}{x^{2}-a^{2}} = \frac{1}{2a}\log\left|\frac{x-a}{x+a}\right| + c \ \text{or}\ -\frac{1}{a}\coth^{-1}\frac{x}{a} + c
22. \frac{d}{dx}(\csc^{-1}x) = \frac{-1}{|x|\sqrt{x^{2}-1}};  \int \frac{dx}{a^{2}-x^{2}} = \frac{1}{2a}\log\left|\frac{a+x}{a-x}\right| + c \ \text{or}\ \frac{1}{a}\tanh^{-1}\frac{x}{a} + c
23. \frac{d}{dx}(uv) = u\,v' + v\,u';  \int uv\,dx = u\int v\,dx - \int\left(u'\int v\,dx\right)dx
24. \frac{d}{dx}\left(\frac{u}{v}\right) = \frac{u'v - v'u}{v^{2}}
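Entries on a sheet like this are easy to spot-check numerically. The sketch below (Python, not part of the original compilation; `simpson` is a small helper defined here) compares a composite Simpson's-rule estimate with the closed forms of entries 18 and 20:

```python
import math

def simpson(f, lo, hi, n=1000):
    """Composite Simpson's rule with n (even) subintervals on [lo, hi]."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

a = 2.0

# Entry 18: integral of 1/(a^2 + x^2) over [0, 1] should equal (1/a) tan^{-1}(x/a) at the limits.
lhs = simpson(lambda x: 1 / (a**2 + x**2), 0.0, 1.0)
rhs = (1 / a) * math.atan(1 / a)
assert abs(lhs - rhs) < 1e-9

# Entry 20: integral of 1/sqrt(x^2 + a^2) over [0, 1] should equal sinh^{-1}(x/a) at the limits.
lhs = simpson(lambda x: 1 / math.sqrt(x**2 + a**2), 0.0, 1.0)
rhs = math.asinh(1 / a)
assert abs(lhs - rhs) < 1e-9
```

Since both quadrature estimates agree with the antiderivatives to within 10^{-9}, this gives a cheap guard against transcription errors when copying formulas from the table.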

Further standard integrals:

\int \sec^{2}x\,dx = \tan x + c
\int \sec x\tan x\,dx = \sec x + c
\int \csc^{2}x\,dx = -\cot x + c
\int \csc x\cot x\,dx = -\csc x + c
\int \sqrt{x^{2}+a^{2}}\,dx = \frac{x}{2}\sqrt{x^{2}+a^{2}} + \frac{a^{2}}{2}\sinh^{-1}\frac{x}{a} + c
\int \sqrt{x^{2}-a^{2}}\,dx = \frac{x}{2}\sqrt{x^{2}-a^{2}} - \frac{a^{2}}{2}\cosh^{-1}\frac{x}{a} + c
\int \sqrt{a^{2}-x^{2}}\,dx = \frac{x}{2}\sqrt{a^{2}-x^{2}} + \frac{a^{2}}{2}\sin^{-1}\frac{x}{a} + c
\int e^{ax}\sin bx\,dx = \frac{e^{ax}}{a^{2}+b^{2}}\left[a\sin bx - b\cos bx\right] + c
\int e^{ax}\cos bx\,dx = \frac{e^{ax}}{a^{2}+b^{2}}\left[a\cos bx + b\sin bx\right] + c

Trigonometric identities:

\sin^{2}x + \cos^{2}x = 1,\quad \sec^{2}x - \tan^{2}x = 1,\quad \csc^{2}x - \cot^{2}x = 1
\cos 2x = \cos^{2}x - \sin^{2}x = 2\cos^{2}x - 1 = 1 - 2\sin^{2}x
\sin 2x = 2\sin x\cos x
\sin 3x = 3\sin x - 4\sin^{3}x
\cos 3x = 4\cos^{3}x - 3\cos x
\tan 2x = \frac{2\tan x}{1-\tan^{2}x}
\tan(x\pm y) = \frac{\tan x \pm \tan y}{1 \mp \tan x\tan y}
\cos(x\pm y) = \cos x\cos y \mp \sin x\sin y
\sin(x\pm y) = \sin x\cos y \pm \cos x\sin y
\sin C + \sin D = 2\sin\frac{C+D}{2}\cos\frac{C-D}{2}
\sin C - \sin D = 2\sin\frac{C-D}{2}\cos\frac{C+D}{2}
\cos C + \cos D = 2\cos\frac{C+D}{2}\cos\frac{C-D}{2}
\cos C - \cos D = -2\sin\frac{C+D}{2}\sin\frac{C-D}{2}
\sin(A+B) + \sin(A-B) = 2\sin A\cos B
\sin(A+B) - \sin(A-B) = 2\cos A\sin B
\cos(A+B) + \cos(A-B) = 2\cos A\cos B
\cos(A+B) - \cos(A-B) = -2\sin A\sin B

Standard series expansions:

e^{x} = 1 + \frac{x}{1!} + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + \cdots
\sin x = x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \frac{x^{7}}{7!} + \cdots
\cos x = 1 - \frac{x^{2}}{2!} + \frac{x^{4}}{4!} - \frac{x^{6}}{6!} + \cdots
\log(1+x) = x - \frac{x^{2}}{2} + \frac{x^{3}}{3} - \frac{x^{4}}{4} + \cdots
\log(1-x) = -\left(x + \frac{x^{2}}{2} + \frac{x^{3}}{3} + \cdots\right)
(1+x)^{-1} = 1 - x + x^{2} - x^{3} + x^{4} - \cdots
(1-x)^{-1} = 1 + x + x^{2} + x^{3} + x^{4} + \cdots
(1+x)^{-2} = 1 - 2x + 3x^{2} - 4x^{3} + \cdots
(1-x)^{-2} = 1 + 2x + 3x^{2} + 4x^{3} + \cdots
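The series and product-to-sum identities above can likewise be verified by comparing partial sums with library values. The Python below is an illustrative sketch (the `maclaurin` helper is defined here, not part of the sheet):

```python
import math

def maclaurin(coef, x, n_terms=40):
    """Partial sum of coef(k) * x**k for k = 0 .. n_terms-1."""
    return sum(coef(k) * x**k for k in range(n_terms))

x = 0.5

# e^x = 1 + x/1! + x^2/2! + ...
ex = maclaurin(lambda k: 1 / math.factorial(k), x)
assert abs(ex - math.exp(x)) < 1e-12

# sin x = x - x^3/3! + x^5/5! - ...  (only odd powers, alternating signs)
sx = maclaurin(lambda k: 0 if k % 2 == 0 else (-1) ** (k // 2) / math.factorial(k), x)
assert abs(sx - math.sin(x)) < 1e-12

# log(1+x) = x - x^2/2 + x^3/3 - ...  (valid for |x| < 1; slower convergence, so more terms)
lx = maclaurin(lambda k: 0 if k == 0 else (-1) ** (k + 1) / k, x, n_terms=60)
assert abs(lx - math.log1p(x)) < 1e-12

# Product-to-sum: sin C + sin D = 2 sin((C+D)/2) cos((C-D)/2)
C, D = 1.1, 0.3
assert abs(math.sin(C) + math.sin(D)
           - 2 * math.sin((C + D) / 2) * math.cos((C - D) / 2)) < 1e-12
```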

Properties of definite integrals:

\int_{-a}^{a} f(x)\,dx =
\begin{cases}
0 & \text{if } f \text{ is odd} \\
2\int_{0}^{a} f(x)\,dx & \text{if } f \text{ is even}
\end{cases}

\int_{0}^{2a} f(x)\,dx =
\begin{cases}
2\int_{0}^{a} f(x)\,dx & \text{if } f(2a-x) = f(x) \\
0 & \text{if } f(2a-x) = -f(x)
\end{cases}
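These symmetry properties can be demonstrated numerically. The following sketch (Python assumed, with a simple midpoint-rule helper that is not part of the original sheet) checks each case:

```python
import math

def midpoint(f, lo, hi, n=20000):
    """Midpoint-rule approximation of the integral of f over [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

a = 2.0

# Odd integrand: x^3 over [-a, a] integrates to zero.
assert abs(midpoint(lambda x: x**3, -a, a)) < 1e-6

# Even integrand: x^2 over [-a, a] equals twice the half-range integral.
full = midpoint(lambda x: x**2, -a, a)
half = midpoint(lambda x: x**2, 0.0, a)
assert abs(full - 2 * half) < 1e-6

# f(2a - x) = f(x) case: sin x on [0, pi] with a = pi/2, since sin(pi - x) = sin x.
A = math.pi / 2
assert abs(midpoint(math.sin, 0.0, 2 * A) - 2 * midpoint(math.sin, 0.0, A)) < 1e-6
```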

Cartesian to Cylindrical Co-ordinates:
x = \rho\cos\theta,\quad y = \rho\sin\theta,\quad z = z, \text{ and } dx\,dy\,dz = \rho\,d\rho\,d\theta\,dz

Cartesian to Spherical Co-ordinates:
x = r\sin\theta\cos\phi,\quad y = r\sin\theta\sin\phi,\quad z = r\cos\theta, \text{ and } dx\,dy\,dz = r^{2}\sin\theta\,dr\,d\theta\,d\phi
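As a worked illustration of the spherical volume element (Python assumed; a rough midpoint-rule triple integral, not part of the original sheet), integrating r^2 sin(theta) dr dtheta dphi over a ball of radius R reproduces the familiar volume 4*pi*R^3/3:

```python
import math

def sphere_volume(R, n=80):
    """Midpoint-rule integral of the Jacobian r^2 sin(theta) for 0<=r<=R, 0<=theta<=pi.
    The integrand does not depend on phi, so the phi integral is a factor of 2*pi."""
    dr, dth = R / n, math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        for j in range(n):
            th = (j + 0.5) * dth
            total += r**2 * math.sin(th) * dr * dth
    return 2 * math.pi * total

R = 1.5
exact = 4 / 3 * math.pi * R**3
assert abs(sphere_volume(R) - exact) / exact < 1e-3
```

The same exercise with the cylindrical element rho drho dtheta dz recovers the volume of a cylinder, pi*R^2*h.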
