
APPLICATIONS OF MATRICES MULTIPLICATION TO DETERMINANT AND ROTATIONS FORMULAS IN $\mathbb{R}^n$

arXiv:1010.3729v1 [math.HO] 18 Oct 2010

ALEX GOLDVARD AND LAVI KARP

Abstract. This note deals with two topics of linear algebra. We give a simple and short proof of the multiplicative property of the determinant and provide a constructive formula for rotations. The derivation of the rotation matrix relies on simple matrix calculations and thus can be presented in an elementary linear algebra course. We also classify all invariant subspaces of equiangular rotations in 4D.

2010 Mathematics Subject Classification. Primary 15B10, 97H60; Secondary 15A04, 15A15.
Key words and phrases. Rotation matrix, matrices multiplication, determinant, orthogonal matrices, equiangular rotations.

1. Introduction
This article aims to promote several geometric aspects of linear algebra. The geometric motivation often leads to simple proofs, in addition to increasing students' interest in the subject and providing a solid foundation. Students with a confident grasp of these ideas will encounter little difficulty in extending them to more abstract linear spaces. Many geometrical operations can be rephrased in the language of vectors and matrices. These include projections, reflections and rotations, operations which have numerous applications in engineering, physics, chemistry and economics, and therefore their matrix representations should be included in a basic linear algebra course.
There are basically two approaches to teaching linear algebra. The abstract one deals with formal definitions of vector spaces, linear transformations and so on. In contrast, the analytic approach deals mainly with the vector space $\mathbb{R}^n$ and develops the basic concepts and proofs in these spaces.
However, when the analytic approach deals with the definition of linear transformations, it uses the notion of the matrix representation of a transformation in an arbitrary basis, and thus it actually goes back to the abstract setting. To clarify this issue, let us consider for example the calculation of a rotation T of a vector x in $\mathbb{R}^3$. One starts by choosing an appropriate basis B and calculating the matrix $[T]_B$ of the given transformation in this basis. In the second step one has to compute the change-of-basis matrix P from the standard basis to the basis B, together with its inverse $P^{-1}$. The final step consists of applying the matrix $P^{-1}[T]_B P$ to x. This cumbersome machinery is common to both the abstract and the analytic approaches. In addition, it takes considerable effort to teach all the details necessary to use this impractical formula.
It is puzzling why one should use this complicated procedure when Rodrigues' rotation formula does the job efficiently. Despite its simplicity, Rodrigues' formula does not appear in current linear algebra textbooks. Its proof is elementary, but it requires non-trivial geometric insight.
In this note we present a new proof of the matrix representation of Rodrigues' formula. The essential point is that we regard the multiplication Ax, of a vector x by a matrix A, simultaneously as an algebraic operation and as a geometric transformation (similarly to Lay [1]). This enables us to derive the rotation formula in three-dimensional space. In higher-dimensional spaces we first propose a geometric definition of a rotation, and after that we derive the formula in a similar manner to the three-dimensional case. Having calculated the matrix of a rotation according to that definition, we show it is identical to the common definition of a rotation, that is, an orthogonal matrix with determinant one.
We also consider the multiplication of two matrices AB as the multiplication of the columns of B by the matrix A. Applying this point of view, we provide a simple proof of the multiplicative property of the determinant. The standard proof of this property is often skipped in the classroom, since it is considered too complicated. The proof we present here could easily be taught at the beginning of a linear algebra course.

2. Basic facts and notations


We recall the definitions of the multiplication of a vector by a matrix and of the multiplication of two matrices. Both definitions rely solely on the two basic operations on vectors in $\mathbb{R}^n$, namely, addition and multiplication by a scalar.

We denote an m × n matrix A by $[a_1, a_2, \dots, a_n]$, where $\{a_1, a_2, \dots, a_n\}$ are the columns of A. Let $x = (x_1, \dots, x_n)^T$ be a vector in $\mathbb{R}^n$; then

(1) $Ax = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n$.

This means that Ax is simply a linear combination of the columns of the matrix A. Conversely, any linear combination of n vectors $\{a_1, a_2, \dots, a_n\}$ can be written as a matrix multiplication. Note that besides Ax being a linear combination, we can interpret it as a transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$ which assigns to each $x \in \mathbb{R}^n$ the vector $Ax = y \in \mathbb{R}^m$. Obviously this operation has the linearity property:

(2) A(αu + βv) = αAu + βAv.


Thus any linear transformation can be written as the multiplication of vectors by a matrix, and therefore the formal definition of linear transformations from $\mathbb{R}^n$ to $\mathbb{R}^m$ seems superfluous.
Most textbooks define matrix multiplication by the row-column rule. But the original definition of Cayley is by means of a composition of two linear substitutions (see e.g. [4, 6]). This means that if A is an m × n matrix and B is an n × k matrix, then the matrix C = AB is defined through the identity $Cx = A(Bx)$, where $x \in \mathbb{R}^k$. From (1) and (2) it immediately follows that

(3) $AB = [Ab_1, Ab_2, \dots, Ab_k]$,

where $\{b_1, b_2, \dots, b_k\}$ are the columns of the matrix B.
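To make (1) and (3) concrete, here is a minimal numerical sketch in Python with numpy; the matrices and vectors are arbitrary illustrative values, not taken from the text:

```python
import numpy as np

# An m x n matrix A with columns a_1, ..., a_n (arbitrary values; m = 2, n = 3).
A = np.array([[1., 2., 0.],
              [0., 1., 3.]])
x = np.array([2., -1., 4.])

# (1): Ax is the linear combination x_1 a_1 + x_2 a_2 + x_3 a_3 of the columns.
lin_comb = sum(x[i] * A[:, i] for i in range(A.shape[1]))
assert np.allclose(A @ x, lin_comb)

# (3): the columns of AB are A b_1, ..., A b_k (here n = 3, k = 2).
B = np.array([[1., 0.],
              [2., 1.],
              [0., -1.]])
AB_by_columns = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])
assert np.allclose(A @ B, AB_by_columns)
```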


Another useful way to multiply matrices is the column-row rule, that is,

(4) $AB = a_1 b_1^T + \cdots + a_n b_n^T$,

where $b_1^T, \dots, b_n^T$ are the rows of B (equivalently, $b_1, \dots, b_n$ are the columns of the transpose $B^T$). This type of multiplication will be used in Section 4.
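The column-row rule (4) can be checked numerically in the same spirit; again the sizes and entries below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # columns a_1, a_2, a_3
B = rng.standard_normal((3, 5))   # rows b_1^T, b_2^T, b_3^T

# (4): AB equals the sum of the outer products a_i b_i^T.
outer_sum = sum(np.outer(A[:, i], B[i, :]) for i in range(A.shape[1]))
assert np.allclose(A @ B, outer_sum)
```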

3. Multiplicative property of determinant


Let A be an n × n matrix with entries $[a_{ij}]_{i,j=1}^{n}$. The determinant of A is defined to be the scalar

(5) $\det(A) = \sum_p \sigma(p)\, a_{1p_1} a_{2p_2} \cdots a_{np_n}$,

where the sum is taken over the n! permutations $p = (p_1, p_2, \dots, p_n)$ of $(1, 2, \dots, n)$ and

$\sigma(p) = \begin{cases} +1 & \text{if } p \text{ is an even permutation} \\ -1 & \text{if } p \text{ is an odd permutation.} \end{cases}$
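For small n, definition (5) can be implemented directly. The sketch below (the function names are ours) evaluates the O(n!) permutation sum and compares it with numpy's built-in determinant:

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Sign of the permutation p, computed by counting inversions."""
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
                     if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det_by_definition(A):
    """The permutation sum (5); O(n!), so for illustration only."""
    n = A.shape[0]
    return sum(perm_sign(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
assert np.isclose(det_by_definition(A), np.linalg.det(A))
```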
Theorem 1. Suppose that A and $[u, v, \dots, z]$ are two n × n matrices. Then

(6) $\det([Au, Av, \dots, Az]) = \det(A) \det([u, v, \dots, z])$.
Proof. Denote A by $[a_1, a_2, \dots, a_n]$ and let $u = (u_1, u_2, \dots, u_n)^T$, $v = (v_1, v_2, \dots, v_n)^T$, ..., $z = (z_1, z_2, \dots, z_n)^T$.
We now use formula (1), the linearity of the determinant in each column, and the fact that the determinant of a matrix with two equal columns is zero. Together these give

$\det([Au, Av, \dots, Az]) = \det\Big(\Big[\sum_{i=1}^{n} u_i a_i, \sum_{i=1}^{n} v_i a_i, \dots, \sum_{i=1}^{n} z_i a_i\Big]\Big) = \sum_p (u_{p_1} v_{p_2} \cdots z_{p_n}) \det([a_{p_1}, a_{p_2}, \dots, a_{p_n}])$.

Since the determinant changes sign when two columns are interchanged, we have

$\det([a_{p_1}, a_{p_2}, \dots, a_{p_n}]) = \sigma(p) \det(A)$.

Therefore,

$\det([Au, Av, \dots, Az]) = \det(A) \sum_p \sigma(p)\, u_{p_1} v_{p_2} \cdots z_{p_n} = \det(A) \det([u, v, \dots, z])$. $\square$
The above theorem has an important geometric interpretation, which we discuss here in $\mathbb{R}^2$. It is well known that if u, v are two non-collinear vectors, then $|\det([u, v])|$ is the area of the parallelogram spanned by $\{u, v\}$. Now if $\det(A) \neq 0$, then $\{Au, Av\}$ also span a parallelogram. Therefore the number $|\det(A)|$ is the ratio between the areas of the parallelograms spanned by $\{Au, Av\}$ and $\{u, v\}$.
Note that Theorem 1 gives the multiplicative property of the deter-
minant. Indeed, let B = [u, v, . . . , z], then AB = [Au, Av, . . . , Az] and
(6) becomes
det (AB) = det(A) det(B).
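A quick numerical sanity check of this identity on random matrices (a demonstration, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) det(B), up to floating-point error.
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```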

4. The derivation of a formula for a rotation matrix in $\mathbb{R}^n$
First of all, let us discuss the definition of a rotation in $\mathbb{R}^n$. The common definition of a rotation is by means of an orthogonal matrix with determinant one. Here we provide another definition based on geometric considerations. We are aware that such a definition was probably given in the past, but we could not trace it. Its advantage is that it is practical: the computation of the rotation matrix does not utilize eigenvalues and eigenvectors. We shall then verify the equivalence of the two definitions.
It is rather simple to define a rotation in $\mathbb{R}^2$. A linear transformation R is a rotation if the angle between the vectors Rx and x is constant for all $x \in \mathbb{R}^2$. From this definition it follows that

$R \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix}$,

where the rotation is by an angle α counterclockwise. Therefore

(7) $R = \begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix} = \cos\alpha\, I + \sin\alpha\, J$,

where I is the identity matrix and $J = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$. Formula (7) implies that $\|Rx\| = \|x\|$ for each $x \in \mathbb{R}^2$, which confirms our geometric intuition. Matrices which preserve the norm are called orthogonal matrices. Their determinant is ±1, and rotations are orthogonal matrices with determinant one.
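The following short sketch verifies the stated properties of formula (7) for one arbitrary angle:

```python
import numpy as np

alpha = 0.7                                  # an arbitrary angle
I = np.eye(2)
J = np.array([[0., -1.],
              [1.,  0.]])
R = np.cos(alpha) * I + np.sin(alpha) * J    # formula (7)

assert np.allclose(R @ R.T, I)               # R is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)     # determinant one
assert np.allclose(J @ J, -I)                # J^2 = -I
```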
The definition of a rotation in $\mathbb{R}^3$ is slightly more involved. A linear transformation R is called a rotation if there exists a two-dimensional subspace Π of $\mathbb{R}^3$ (the plane of the rotation) such that the angle between the vectors Rx and x is constant for all $x \in \Pi$, and Ry = y for each y orthogonal to Π (the axis of the rotation). Euler's theorem on the rigid motions of a sphere with fixed center justifies this definition.
In order to calculate the matrix R we pick two orthonormal vectors $a, b \in \Pi$ and a unit vector c orthogonal to Π such that the triple $\{a, b, c\}$ is right-handed. Applying the rotation R to these vectors yields

$Ra = \cos\alpha\, a + \sin\alpha\, b, \quad Rb = -\sin\alpha\, a + \cos\alpha\, b, \quad Rc = c$.

We write the above equalities in matrix form as RP = Q, where P = [a, b, c] and Q = [Ra, Rb, Rc]. Since P is an orthogonal matrix, $R = QP^T$, and calculating $QP^T$ by means of (4) we get

(8) $R = [Ra, Rb, Rc] \begin{bmatrix} a^T \\ b^T \\ c^T \end{bmatrix} = (Ra)a^T + (Rb)b^T + (Rc)c^T = \cos\alpha \left(aa^T + bb^T\right) + \sin\alpha \left(ba^T - ab^T\right) + cc^T$.

The skew-symmetric matrix $(ba^T - ab^T)$ is the matrix representation of the cross product $x \mapsto c \times x$. To see this, note that $(ba^T - ab^T)c = 0$, $(ba^T - ab^T)a = b$ and $(ba^T - ab^T)b = -a$. Since $aa^T + bb^T = I - cc^T$, formula (8) gives

$Rx = \cos\alpha\, x + (1 - \cos\alpha)\, cc^T x + \sin\alpha\, (c \times x)$,

which is the well-known Rodrigues formula.
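As an illustration, the sketch below builds R from formula (8) for one particular right-handed orthonormal triple {a, b, c} (our choice; any other works) and checks it against Rodrigues' formula:

```python
import numpy as np

alpha = 0.5
# An orthonormal pair a, b spanning the rotation plane; c is the axis.
a = np.array([1., 1., 0.]) / np.sqrt(2)
b = np.array([-1., 1., 1.]) / np.sqrt(3)
c = np.cross(a, b)                           # unit, {a, b, c} right-handed

# Formula (8): R = cos(alpha)(aa^T + bb^T) + sin(alpha)(ba^T - ab^T) + cc^T.
R = (np.cos(alpha) * (np.outer(a, a) + np.outer(b, b))
     + np.sin(alpha) * (np.outer(b, a) - np.outer(a, b))
     + np.outer(c, c))

# Rodrigues' formula applied to an arbitrary test vector.
x = np.array([0.3, -1.2, 2.0])
rodrigues = (np.cos(alpha) * x
             + (1 - np.cos(alpha)) * np.outer(c, c) @ x
             + np.sin(alpha) * np.cross(c, x))
assert np.allclose(R @ x, rodrigues)
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
```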
Formula (8) closely resembles the two-dimensional formula (7). The matrix $(aa^T + bb^T)$ is the projection onto the plane Π, $cc^T$ is the projection onto the line orthogonal to Π, and

(9) $(ba^T - ab^T)^2 = -(aa^T + bb^T)$.

Since the rotation actually takes place in the plane Π, we see that $(aa^T + bb^T)$ corresponds to I and $(ba^T - ab^T)$ corresponds to J in formula (7). In both formulas, cos α is the coefficient of a symmetric matrix and sin α is the coefficient of an anti-symmetric matrix.
It is easy to check that the matrix R(α) := R in (8) is an orthogonal matrix with determinant one. Indeed, relation (9) and the orthogonality of $\{a, b, c\}$ imply that

$R(\alpha)R^T(\alpha) = \left(\cos^2\alpha + \sin^2\alpha\right)\left(aa^T + bb^T\right) + cc^T = I$,

and hence $\det(R(\alpha)) = \pm 1$. Since $\lim_{\alpha \to 0} R(\alpha) = I$ and the determinant is continuous, we see that $\det(R(\alpha)) = 1$.
We turn now to rotations in $\mathbb{R}^4$. It turns out that rotations in $\mathbb{R}^4$ can be defined in a similar way to rotations in $\mathbb{R}^2$ and $\mathbb{R}^3$. We say that a linear transformation R is a rotation if there exists a two-dimensional subspace Π of $\mathbb{R}^4$ such that the angle between the vectors Rx and x is constant for all $x \in \Pi$, and the angle between the vectors Ry and y is constant for all $y \in \Pi^\perp$, the orthogonal complement of Π.
The calculations are done in the same manner as in $\mathbb{R}^3$. Pick $a, b \in \Pi$ and $c, d \in \Pi^\perp$ such that the set $\{a, b, c, d\}$ is an orthonormal basis. Let R = R(α, β) be the rotation matrix with rotation angles α in the plane Π and β in the orthogonal complement $\Pi^\perp$. Then

$Ra = \cos\alpha\, a + \sin\alpha\, b, \quad Rb = -\sin\alpha\, a + \cos\alpha\, b,$
$Rc = \cos\beta\, c + \sin\beta\, d, \quad Rd = -\sin\beta\, c + \cos\beta\, d.$

Set P = [a, b, c, d] and Q = [Ra, Rb, Rc, Rd]. Since the matrix P is orthogonal, $R = QP^T$ and hence

(10) $R = (Ra)a^T + (Rb)b^T + (Rc)c^T + (Rd)d^T = \cos\alpha \left(aa^T + bb^T\right) + \sin\alpha \left(ba^T - ab^T\right) + \cos\beta \left(cc^T + dd^T\right) + \sin\beta \left(dc^T - cd^T\right)$.

We can now easily distinguish between two types of 4D rotations. If β = 0, then the rotation is simple, that is, Ry = y for all $y \in \Pi^\perp$. Otherwise, both planes Π and $\Pi^\perp$ rotate simultaneously, and this type is called a double rotation.
If one doubts whether the matrix R in (10) is an orthogonal matrix with determinant one, the following simple calculation settles the matter. Since

$R(\alpha, \beta)R^T(\alpha, \beta) = \left(\cos^2\alpha + \sin^2\alpha\right)\left(aa^T + bb^T\right) + \left(\cos^2\beta + \sin^2\beta\right)\left(cc^T + dd^T\right) = I$,

R(α, β) is an orthogonal matrix, and by letting α and β go to zero we get that its determinant is one.
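A numerical sketch of a double rotation built from formula (10); the helper function and the choice of the standard basis for {a, b, c, d} are ours:

```python
import numpy as np

alpha, beta = 0.4, 1.1
a, b, c, d = np.eye(4)     # the standard basis as {a, b, c, d}; Pi = span{a, b}

def plane_block(u, v, angle):
    """One summand of (10): cos(angle)(uu^T + vv^T) + sin(angle)(vu^T - uv^T)."""
    return (np.cos(angle) * (np.outer(u, u) + np.outer(v, v))
            + np.sin(angle) * (np.outer(v, u) - np.outer(u, v)))

R = plane_block(a, b, alpha) + plane_block(c, d, beta)   # formula (10)

assert np.allclose(R @ R.T, np.eye(4))       # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)     # determinant one
```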
We are now in a position to extend the geometric definition of rotations to $\mathbb{R}^n$ for an arbitrary positive integer n. For n = 2p we say that a linear transformation R is a rotation if there exist p mutually orthogonal planes $\Pi_k$ such that the angle between the vectors Rx and x is constant for all $x \in \Pi_k$, k = 1, ..., p. For n = 2p + 1 we require that there are p mutually orthogonal planes $\Pi_k$ and, in addition, a line L orthogonal to all the $\Pi_k$, k = 1, ..., p, such that R behaves on the planes $\Pi_k$ as in the even-dimensional case and Ry = y for all $y \in L$. The extension of formulas (8) and (10) to arbitrary dimension is straightforward. In $\mathbb{R}^{2p}$ there is an orthonormal basis $\{a_1, b_1, \dots, a_p, b_p\}$ such that

(11) $R = \sum_{k=1}^{p} \left[\cos\alpha_k \left(a_k a_k^T + b_k b_k^T\right) + \sin\alpha_k \left(b_k a_k^T - a_k b_k^T\right)\right]$.

In odd dimension 2p + 1 there is an orthonormal basis $\{a_1, b_1, \dots, a_p, b_p, c\}$ such that

(12) $R = \sum_{k=1}^{p} \left[\cos\alpha_k \left(a_k a_k^T + b_k b_k^T\right) + \sin\alpha_k \left(b_k a_k^T - a_k b_k^T\right)\right] + cc^T$.

Similarly to the rotation formulas (8) and (10), one can check that (11) and (12) are orthogonal matrices with determinant one.

Formulas (11) and (12) were derived in [5], but in a different way. The advantage of the derivation given here is that it is constructive, in addition to being appropriate for an elementary linear algebra course.
Formulas (11) and (12) can be written in vector form as

$Rx = \sum_{k=1}^{p} \left(\cos\alpha_k\, y_k + \sin\alpha_k\, z_k\right)$

and

$Rx = \sum_{k=1}^{p} \left(\cos\alpha_k\, y_k + \sin\alpha_k\, z_k\right) + (c^T x)c$,

where $y_k$ is the projection of the vector x onto the plane $\Pi_k$ and $z_k$ is the rotation of $y_k$ by an angle π/2 in the plane $\Pi_k$.

4.1. Invariant subspaces of equiangular rotations in $\mathbb{R}^4$. A rotation R in $\mathbb{R}^4$ is called an equiangular (or isoclinic) rotation if the plane Π and its orthogonal complement $\Pi^\perp$ rotate by the same angle (see e.g. [2, 3]). When α ≠ β, the planes Π and $\Pi^\perp$ are the only invariant subspaces under the rotation R. However, when α = β, there are infinitely many two-dimensional invariant planes (see e.g. [5]). We shall see here that this interesting phenomenon is a simple consequence of formula (10), and we shall also classify all the invariant planes.
To see this, note that when α = β, formula (10) becomes

$R = \cos\alpha\, I + \sin\alpha\, J$,

where I is the identity on $\mathbb{R}^4$ and $J = ba^T - ab^T + dc^T - cd^T$. Hence a subspace U of $\mathbb{R}^4$ is invariant under R if and only if it is invariant under the matrix J.
Now J is a skew-symmetric matrix satisfying $J^2 = -I$. Therefore it has no real eigenvalues, and this implies that any non-trivial invariant subspace must have dimension two. Since $J^2 = -I$, span{u, Ju} is an invariant subspace for any $u \in \mathbb{R}^4$. On the other hand, if U is a non-trivial invariant subspace of R, then $Ju \in U$ for any $u \in U$, hence U = span{u, Ju} for any nonzero $u \in U$. Thus we have obtained a complete classification of the invariant subspaces of equiangular rotations, and it is independent of the rotation angle α.
It follows from formulas (11) and (12) that if all the angles $\alpha_k$ are equal, then there are infinitely many invariant subspaces, and each of them is spanned by a vector u and $\left(\sum_{k=1}^{p} \left(b_k a_k^T - a_k b_k^T\right)\right) u$.
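The classification can be illustrated numerically: the sketch below builds J for the standard basis of $\mathbb{R}^4$, checks $J^2 = -I$, and verifies that span{u, Ju} is invariant under R for a random u:

```python
import numpy as np

a, b, c, d = np.eye(4)                       # an orthonormal basis of R^4
J = (np.outer(b, a) - np.outer(a, b)
     + np.outer(d, c) - np.outer(c, d))
assert np.allclose(J @ J, -np.eye(4))        # J^2 = -I

alpha = 0.9
R = np.cos(alpha) * np.eye(4) + np.sin(alpha) * J    # equiangular rotation

rng = np.random.default_rng(2)
u = rng.standard_normal(4)
U = np.column_stack([u, J @ u])              # basis of span{u, Ju}

# Ru and R(Ju) must again lie in span{u, Ju}.
for w in (R @ u, R @ J @ u):
    coeffs = np.linalg.lstsq(U, w, rcond=None)[0]
    assert np.allclose(U @ coeffs, w)
```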

5. Concluding Remarks

Many vector-space textbooks use the entry-by-entry definition $c_{ij} = \sum_k a_{ik} b_{kj}$ for matrix multiplication. The operation of multiplying a vector x by a matrix A in accordance with (1) carries both geometric and algebraic meaning. Therefore a solid understanding of it should come before the formal definition of matrix multiplication. After that, matrix multiplication in Cayley's spirit follows naturally. Cayley's definition (3) and the column-row rule (4) have many advantages. In many cases they make computations easier, in addition to increasing comprehension. This note emphasizes two aspects of this approach.
References
1. D.C. Lay, Linear Algebra and Its Applications, 3rd Edition, Addison-Wesley
Higher Education Group, 2003.
2. P. Lounesto, Clifford Algebras and Spinors, 2nd Edition, London Mathematical
Society, Lecture Notes Series 286, Cambridge Press 2001.
3. H.P. Manning, Geometry of Four Dimensions, The Macmillan Company, 1914.
4. C.D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, 2000.
5. S. A. Schelkunoff, On rotations in ordinary and null spaces, American Journal
of Mathematics, 53, No. 1 (1931), 175-185.
6. A. Tucker, The growing importance of linear algebra in undergraduate mathe-
matics, The College Mathematics Journal, 24, No. 1 (1993), 3-9.
E-mail address: [email protected]
Department of Mathematics, ORT Braude College, P.O. Box 78, 21982 Karmiel, Israel

E-mail address: [email protected]
Department of Mathematics, ORT Braude College, P.O. Box 78, 21982 Karmiel, Israel
