Chapter 11
Tensors
Actually, we've been doing tensor analysis all along. All we will do now is to add a few elements
which will make things look even more like classical tensor analysis. One element of this
"look" is a slight extension of the Kronecker delta notation. For any pair of integers $i$ and $j$,
we'll let
$$
\delta^{i}_{j} \;=\; \delta_{i}^{j} \;=\;
\begin{cases}
0 & \text{if } i \neq j \\
1 & \text{if } i = j
\end{cases} \;.
$$
We'll treat these as constant scalar fields on our space.
In all the following, we assume that we are dealing with position in some N-dimensional
space, and that
$$
\{(x^1, x^2, \ldots, x^N)\} \;,\quad \{h_1, h_2, \ldots, h_N\} \quad\text{and}\quad \{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_N\}
$$
is some coordinate system with associated scaling factors and unit tangent vectors. We will not
automatically assume this is an orthogonal system.
11.1 The Reciprocal Basis Fields
Remember that, at each point, $\{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_N\}$ is a basis for the tangent vector space at that
point. There are two other bases that are commonly used for each tangent space,
$$
\{\boldsymbol{\varepsilon}_1, \boldsymbol{\varepsilon}_2, \ldots, \boldsymbol{\varepsilon}_N\}
\quad\text{and}\quad
\{\boldsymbol{\varepsilon}^1, \boldsymbol{\varepsilon}^2, \ldots, \boldsymbol{\varepsilon}^N\} \;.
$$
The first, we've already seen. For each $k$,
$$
\boldsymbol{\varepsilon}_k \;=\; \frac{\partial \mathbf{r}}{\partial x^k} \;=\; h_k \mathbf{e}_k \;.
$$
We often used the $\mathbf{e}_k$'s instead of the $\boldsymbol{\varepsilon}_k$'s simply because we normally prefer unit basis vectors.
The other basis, $\{\boldsymbol{\varepsilon}^1, \boldsymbol{\varepsilon}^2, \ldots, \boldsymbol{\varepsilon}^N\}$, is the basis reciprocal to $\{\boldsymbol{\varepsilon}_1, \boldsymbol{\varepsilon}_2, \ldots, \boldsymbol{\varepsilon}_N\}$. Recalling the
definition of reciprocal bases from section 2.6, each $\boldsymbol{\varepsilon}^i$ at each point in space is
chosen to be the single vector such that:

1. $\boldsymbol{\varepsilon}^i$ is orthogonal to all the $\boldsymbol{\varepsilon}_j$'s except $\boldsymbol{\varepsilon}_i$ (hence $\boldsymbol{\varepsilon}^i \cdot \boldsymbol{\varepsilon}_j = 0$ if $i \neq j$).
Figure 11.1: A basis for a two-dimensional vector space (in black) and the corresponding reciprocal basis (in red).
2. The scalar projection of $\boldsymbol{\varepsilon}^i$ onto $\boldsymbol{\varepsilon}_i$ is $1/\|\boldsymbol{\varepsilon}_i\|$ (hence $\boldsymbol{\varepsilon}^i \cdot \boldsymbol{\varepsilon}_i = 1$).

More concisely,
$$
\boldsymbol{\varepsilon}^i \cdot \boldsymbol{\varepsilon}_j \;=\; \delta^i_j \;. \tag{11.1}
$$
Keep in mind that neither basis need be orthogonal. So the above does not (necessarily) mean
that each $\boldsymbol{\varepsilon}^i$ is parallel to $\boldsymbol{\varepsilon}_i$ (see figure 11.1).

?
Exercise 11.1: Show that, if the coordinate system is orthogonal, then
$$
\boldsymbol{\varepsilon}^i \;=\; \frac{1}{h_i}\,\mathbf{e}_i
\qquad\text{for } i = 1, 2, \ldots, N \;.
$$
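As a quick numerical check of exercise 11.1, here is a minimal sketch (not part of the text; it assumes NumPy and uses two-dimensional polar coordinates purely for illustration). The reciprocal vectors at a point are the rows of the inverse of the matrix whose columns are the $\boldsymbol{\varepsilon}_j$'s.

    import numpy as np

    # Reciprocal basis check in 2-D polar coordinates (an orthogonal system).
    r, th = 2.0, 0.7

    # tangent basis vectors: eps_1 = dr/d(r), eps_2 = dr/d(theta)
    eps_1 = np.array([np.cos(th), np.sin(th)])        # h_1 = 1
    eps_2 = np.array([-r*np.sin(th), r*np.cos(th)])   # h_2 = r

    E = np.column_stack([eps_1, eps_2])   # columns are the eps_j
    E_recip = np.linalg.inv(E)            # rows are the reciprocal vectors eps^i

    print(np.allclose(E_recip @ E, np.eye(2)))    # eps^i . eps_j = delta^i_j
    print(np.allclose(E_recip[0], eps_1))         # eps^1 = e_1 / h_1  (h_1 = 1)
    print(np.allclose(E_recip[1], eps_2 / r**2))  # eps^2 = e_2 / h_2 = eps_2 / r^2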
11.2 Co- and Contravariant Components of Vector Fields

Definitions
Now let $\mathbf{F}$ be any vector field. At each point, it can be expressed in terms of either of these
bases,
$$
\mathbf{F} \;=\; \sum_{i=1}^N F^i \boldsymbol{\varepsilon}_i
\qquad\text{or}\qquad
\mathbf{F} \;=\; \sum_{j=1}^N F_j \boldsymbol{\varepsilon}^j \;.
$$
The $F^i$'s (i.e., the components with respect to the $\boldsymbol{\varepsilon}_i$'s) are called the contravariant components
of $\mathbf{F}$ (with respect to the given coordinate system), and the $F_i$'s (i.e., the components with respect
to the $\boldsymbol{\varepsilon}^i$'s) are called the covariant components of $\mathbf{F}$ (with respect to the given coordinate
system).¹
It is important that you (re)do the next exercise (preferably without looking back at section
2.6). It explains why using reciprocal bases is such a clever thing to do.
¹ Notice that we are superscripting both the coordinates and the contravariant components of vector fields. This
will not help keep terminology straight.
?
Exercise 11.2: Let $\mathbf{V}$ and $\mathbf{U}$ be two vector fields. Using the co- and contravariant
components of each, show that
$$
\mathbf{V} \cdot \mathbf{U}
\;=\; \sum_{i=1}^N V^i U_i
\;=\; \sum_{i=1}^N V_i U^i \;. \tag{11.2}
$$
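Here is a minimal numerical sketch of formula (11.2); it is not part of the text, and the NumPy usage and the particular non-orthogonal basis are illustrative choices.

    import numpy as np

    # Check V . U = sum_i V^i U_i = sum_i V_i U^i for a non-orthogonal basis.
    eps_1, eps_2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
    E = np.column_stack([eps_1, eps_2])

    V = np.array([2.0, -1.0])          # two arbitrary vectors (Cartesian components)
    U = np.array([0.5, 3.0])

    V_contra = np.linalg.solve(E, V)   # V^i : components along eps_i
    U_contra = np.linalg.solve(E, U)
    V_co = E.T @ V                     # V_i = V . eps_i
    U_co = E.T @ U

    print(np.isclose(V @ U, V_contra @ U_co))   # True
    print(np.isclose(V @ U, V_co @ U_contra))   # True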
As a not-very-significant corollary of this exercise, we immediately have the following:
Corollary 11.1 (Quotient Rule for vector fields)
Suppose $\phi$ is a scalar field on a region $R$, and we have two sets of $N$ other scalar fields
$$
\{V^1, V^2, \ldots, V^N\} \quad\text{and}\quad \{U_1, U_2, \ldots, U_N\}
$$
such that
$$
\sum_{i=1}^N V^i U_i \;=\; \phi \quad\text{everywhere in } R \;.
$$
Then
$$
\mathbf{V} \;=\; \sum_{i=1}^N V^i \boldsymbol{\varepsilon}_i
\qquad\text{and}\qquad
\mathbf{U} \;=\; \sum_{i=1}^N U_i \boldsymbol{\varepsilon}^i
$$
are vector fields on $R$ with
$$
\mathbf{V} \cdot \mathbf{U} \;=\; \phi \;.
$$
A more general quotient rule is mentioned in Arfken and Weber,² asserting the following:

Let $\mathbf{V}$ and $\phi$ be, respectively, a vector field and a scalar field, and suppose we
have a set of $N$ general formulas, applicable no matter what coordinate system we
have, that define $N$ scalar fields $U_1$, $U_2$, ..., and $U_N$ (these $U_k$'s will change
with different coordinate systems). Suppose further that, no matter what coordinate
system we have, the corresponding $U_k$'s satisfy
$$
\sum_{k=1}^N V^k U_k \;=\; \phi
$$
where the $V^k$'s are the contravariant components of $\mathbf{V}$ with respect to the coordinate
system. Then there is a single vector field $\mathbf{U}$ such that
$$
\mathbf{U} \;=\; \sum_{i=1}^N U_i \boldsymbol{\varepsilon}^i
$$
in each coordinate system.

Without additional conditions on how those $N$ general formulas defining the $U_i$'s change as
coordinate systems are changed, this quotient rule cannot be accepted as truly valid.³ Instead,
as stated, this quotient rule is more of a strong suggestion that such a single vector field $\mathbf{U}$
exists.
² Page 141, and they only refer to rotated Cartesian systems.
³ After all, for given vector and scalar fields $\mathbf{V}$ and $\phi$, the one equation $\sum_{k=1}^N V^k U_k = \phi$ has the $N$ unknowns
$U_1$, ..., and $U_N$ in each coordinate system. There are infinitely many possible solutions in each coordinate system!
Some Examples
!
Example 11.1: Let $C$ be a curve traced out by a differentiable position-valued function
$\mathbf{r}(t) \leftrightarrow (x^1(t), x^2(t), \ldots, x^N(t))$. By the chain rule,
$$
\frac{d\mathbf{r}}{dt}
\;=\; \sum_{i=1}^N \frac{\partial \mathbf{r}}{\partial x^i} \frac{dx^i}{dt}
\;=\; \sum_{i=1}^N \underbrace{h_i \mathbf{e}_i}_{\boldsymbol{\varepsilon}_i} \frac{dx^i}{dt} \;.
$$
So
$$
\frac{d\mathbf{r}}{dt} \;=\; \sum_{i=1}^N \frac{dx^i}{dt}\, \boldsymbol{\varepsilon}_i \;,
$$
which means that $dx^i/dt$ is the $i^{\text{th}}$ contravariant component of the vector field $d\mathbf{r}/dt$ on the curve $C$.
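A small numerical illustration of this example (a sketch, not from the text; NumPy and the particular curve in polar coordinates are illustrative assumptions):

    import numpy as np

    # The coordinate velocities dx^i/dt are the contravariant components of dr/dt.
    t = 0.4
    r, th = 1.0 + t, t**2              # sample curve: r(t) = 1 + t, theta(t) = t^2
    dr_dt, dth_dt = 1.0, 2*t           # dx^1/dt and dx^2/dt

    # tangent basis at the current point
    eps_1 = np.array([np.cos(th), np.sin(th)])
    eps_2 = np.array([-r*np.sin(th), r*np.cos(th)])

    # velocity assembled from contravariant components ...
    v_from_components = dr_dt*eps_1 + dth_dt*eps_2

    # ... versus the Cartesian velocity of (x, y) = (r cos(th), r sin(th))
    v_cartesian = np.array([dr_dt*np.cos(th) - r*np.sin(th)*dth_dt,
                            dr_dt*np.sin(th) + r*np.cos(th)*dth_dt])

    print(np.allclose(v_from_components, v_cartesian))   # True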
!
Example 11.2 (Covariant components of the gradient): Let $\psi$ be a scalar field with
coordinate formula $\psi(x^1, x^2, \ldots, x^N)$,
$$
\psi(\mathbf{r}) \;=\; \psi(x^1, x^2, \ldots, x^N)
\qquad\text{where}\quad \mathbf{r} \leftrightarrow (x^1, x^2, \ldots, x^N) \;.
$$
Recall that the most general definition of the gradient of $\psi$, $\nabla\psi$, is that it is the vector field
such that
$$
\frac{d}{dt}\bigl[\psi(\mathbf{r}(t))\bigr] \;=\; \nabla\psi \cdot \frac{d\mathbf{r}}{dt}
$$
for any differentiable position-valued function $\mathbf{r}(t)$. Recall, also, that we found the formula
for $\nabla\psi$ to be
$$
\nabla\psi \;=\; \sum_{k=1}^N \frac{1}{h_k} \frac{\partial \psi}{\partial x^k}\, \mathbf{e}_k
$$
provided the coordinate system is orthogonal. From exercise 11.1, we know $\boldsymbol{\varepsilon}^k = \frac{1}{h_k}\mathbf{e}_k$
when the coordinate system is orthogonal, and so the above formula can be rewritten as
$$
\nabla\psi \;=\; \sum_{k=1}^N \frac{\partial \psi}{\partial x^k}\, \boldsymbol{\varepsilon}^k
$$
provided the coordinate system is orthogonal. Thus, if the coordinate system is orthogonal,
then $\partial\psi/\partial x^k$ is the $k^{\text{th}}$ covariant component of $\nabla\psi$. Could
$$
\nabla\psi \;=\; \sum_{k=1}^N \frac{\partial \psi}{\partial x^k}\, \boldsymbol{\varepsilon}^k
$$
be the general formula for the gradient in any coordinate system, orthogonal or not? Well, if
we assume this formula and let
$$
\mathbf{r}(t) \;\leftrightarrow\; \bigl(x^1(t), x^2(t), \ldots, x^N(t)\bigr)
$$
be any (differentiable) position-valued function (as in the previous example), then, by the
classical chain rule and the results from exercise 11.2 (specifically, formula (11.2) for the dot
product), we have
$$
\frac{d}{dt}\bigl[\psi(\mathbf{r}(t))\bigr]
\;=\; \frac{d}{dt}\Bigl[\psi\bigl(x^1(t), x^2(t), \ldots, x^N(t)\bigr)\Bigr]
\;=\; \sum_{i=1}^N \frac{\partial \psi}{\partial x^i} \frac{dx^i}{dt}
$$
$$
\;=\; \left(\sum_{k=1}^N \frac{\partial \psi}{\partial x^k}\, \boldsymbol{\varepsilon}^k\right)
\cdot
\left(\sum_{i=1}^N \frac{dx^i}{dt}\, \boldsymbol{\varepsilon}_i\right)
\;=\; \left(\sum_{k=1}^N \frac{\partial \psi}{\partial x^k}\, \boldsymbol{\varepsilon}^k\right) \cdot \frac{d\mathbf{r}}{dt} \;,
$$
just as we should have. This strongly suggests that, indeed, the general (covariant) formula
for the gradient is
$$
\nabla\psi \;=\; \sum_{k=1}^N \frac{\partial \psi}{\partial x^k}\, \boldsymbol{\varepsilon}^k \;.
$$
However, we have not yet confirmed that this formula defines a vector field independent of the
choice of coordinates. A real cynic may suggest the possibility that this formula with two
different nonorthogonal coordinate systems could lead to two different vector fields that happen
to satisfy the above. This time, that cynic would be wrong: the $\partial\psi/\partial x^i$'s are the covariant
components of $\nabla\psi$. We'll just have to develop a little more theory to confirm that the above
really nice formula is, indeed, coordinate independent.
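The claim that the $\partial\psi/\partial x^k$'s are covariant components can also be checked numerically in a deliberately non-orthogonal system. The sketch below is not from the text; NumPy, the skewed coordinates $(u, v)$ with $x = u + v$, $y = v$, and the field $\psi = x^2 + xy$ are all illustrative choices.

    import numpy as np

    # In the skewed (u, v) system, the covariant components of grad(psi), namely
    # grad(psi) . eps_k, equal the partial derivatives of psi with respect to u, v.
    u, v = 0.3, 1.2
    x, y = u + v, v

    eps_1 = np.array([1.0, 0.0])       # d(x, y)/du
    eps_2 = np.array([1.0, 1.0])       # d(x, y)/dv  (not orthogonal to eps_1)

    grad_psi = np.array([2*x + y, x])  # Cartesian gradient of psi = x^2 + x*y

    dpsi_du = 2*(u + v) + v            # = 2x + y
    dpsi_dv = 3*(u + v) + v            # = 3x + y

    print(np.isclose(grad_psi @ eps_1, dpsi_du))   # True
    print(np.isclose(grad_psi @ eps_2, dpsi_dv))   # True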
11.3 Converting Between Co- and Contravariant Representations

Our goal now is to determine how to (easily) find the covariant components of a vector field from
its contravariant components, and its contravariant components from its covariant components.
We start by finding convenient formulas for expressing the vectors in either one of the bases
$$
\{\boldsymbol{\varepsilon}_1, \boldsymbol{\varepsilon}_2, \ldots, \boldsymbol{\varepsilon}_N\}
\quad\text{or}\quad
\{\boldsymbol{\varepsilon}^1, \boldsymbol{\varepsilon}^2, \ldots, \boldsymbol{\varepsilon}^N\}
$$
in terms of the other.
Components of the Reciprocal Bases with Respect to Each Other
Keep in mind that, at each point,
$$
\{\boldsymbol{\varepsilon}_1, \boldsymbol{\varepsilon}_2, \ldots, \boldsymbol{\varepsilon}_N\}
\quad\text{and}\quad
\{\boldsymbol{\varepsilon}^1, \boldsymbol{\varepsilon}^2, \ldots, \boldsymbol{\varepsilon}^N\}
$$
are both bases for the same tangent space of vectors at that point. So each $\boldsymbol{\varepsilon}_i$ can be written in
terms of the $\boldsymbol{\varepsilon}^j$'s, and each $\boldsymbol{\varepsilon}^j$ can be written in terms of the $\boldsymbol{\varepsilon}_i$'s, say
$$
\boldsymbol{\varepsilon}_i \;=\; \sum_{k=1}^N a_{ik}\, \boldsymbol{\varepsilon}^k
\qquad\text{and}\qquad
\boldsymbol{\varepsilon}^j \;=\; \sum_{k=1}^N b^{jk}\, \boldsymbol{\varepsilon}_k
$$
for some scalar fields $a_{ik}$ and $b^{jk}$.
Now, also recall the relations between the dot products of these basis vectors and the metric and
the Kronecker delta,
$$
\boldsymbol{\varepsilon}_i \cdot \boldsymbol{\varepsilon}_j \;=\; g_{ij}
\qquad\text{and}\qquad
\boldsymbol{\varepsilon}^k \cdot \boldsymbol{\varepsilon}_j \;=\; \delta^k_j \;.
$$
(The first was actually the definition of the [covariant] components of the metric [see page 8-29],
and the second was essentially the defining formula for the reciprocal basis.) Combining all the
above, we get
$$
g_{ij}
\;=\; \boldsymbol{\varepsilon}_i \cdot \boldsymbol{\varepsilon}_j
\;=\; \left(\sum_{k=1}^N a_{ik}\, \boldsymbol{\varepsilon}^k\right) \cdot \boldsymbol{\varepsilon}_j
\;=\; \sum_{k=1}^N a_{ik}\, \boldsymbol{\varepsilon}^k \cdot \boldsymbol{\varepsilon}_j
\;=\; \sum_{k=1}^N a_{ik}\, \delta^k_j
\;=\; a_{ij} \;.
$$
So,
$$
\boldsymbol{\varepsilon}_i \;=\; \sum_{j=1}^N g_{ij}\, \boldsymbol{\varepsilon}^j
\qquad\text{for } i = 1, 2, \ldots, N \;. \tag{11.3}
$$
In other words, the covariant components of the metric are also the covariant components of the
$\boldsymbol{\varepsilon}_i$'s.
To get the contravariant components of the $\boldsymbol{\varepsilon}^j$'s (the $b^{jk}$'s), we go back to elementary
matrix theory and rewrite the above formula for the $\boldsymbol{\varepsilon}_i$'s in matrix form,
$$
\begin{bmatrix}
\boldsymbol{\varepsilon}_1 \\ \boldsymbol{\varepsilon}_2 \\ \vdots \\ \boldsymbol{\varepsilon}_N
\end{bmatrix}
\;=\; G
\begin{bmatrix}
\boldsymbol{\varepsilon}^1 \\ \boldsymbol{\varepsilon}^2 \\ \vdots \\ \boldsymbol{\varepsilon}^N
\end{bmatrix}
\qquad\text{where}\quad [G]_{ij} = g_{ij} \;.
$$
Then, of course,
$$
\begin{bmatrix}
\boldsymbol{\varepsilon}^1 \\ \boldsymbol{\varepsilon}^2 \\ \vdots \\ \boldsymbol{\varepsilon}^N
\end{bmatrix}
\;=\; G^{-1}
\begin{bmatrix}
\boldsymbol{\varepsilon}_1 \\ \boldsymbol{\varepsilon}_2 \\ \vdots \\ \boldsymbol{\varepsilon}_N
\end{bmatrix} \;.
$$
Because of this relation, it is natural to define the contravariant components of the metric (with
respect to the given coordinate system) as the corresponding entries in $G^{-1}$, and to use $g^{ij}$ to
denote these quantities. That is,
$$
g^{ij} \;=\; \bigl[G^{-1}\bigr]_{ij}
\qquad\text{where}\quad [G]_{mn} = g_{mn} \;.
$$
By the above, we then have
$$
\boldsymbol{\varepsilon}^i \;=\; \sum_{k=1}^N g^{ik}\, \boldsymbol{\varepsilon}_k
\qquad\text{for } i = 1, 2, \ldots, N \;. \tag{11.4}
$$
From the last formula, we see that
$$
\boldsymbol{\varepsilon}^i \cdot \boldsymbol{\varepsilon}^j
\;=\; \left(\sum_{k=1}^N g^{ik}\, \boldsymbol{\varepsilon}_k\right) \cdot \boldsymbol{\varepsilon}^j
\;=\; \sum_{k=1}^N g^{ik}\, \boldsymbol{\varepsilon}_k \cdot \boldsymbol{\varepsilon}^j
\;=\; \sum_{k=1}^N g^{ik}\, \delta_k^j
\;=\; g^{ij} \;,
$$
showing that there is a relation between the $g^{ij}$'s and the $\boldsymbol{\varepsilon}^i$'s analogous to that between the
$g_{ij}$'s and the $\boldsymbol{\varepsilon}_i$'s,
$$
\boldsymbol{\varepsilon}_i \cdot \boldsymbol{\varepsilon}_j \;=\; g_{ij}
\qquad\text{and}\qquad
\boldsymbol{\varepsilon}^i \cdot \boldsymbol{\varepsilon}^j \;=\; g^{ij} \;.
$$
From this, it follows that the $g^{ij}$'s are symmetric (since $\boldsymbol{\varepsilon}^i \cdot \boldsymbol{\varepsilon}^j = \boldsymbol{\varepsilon}^j \cdot \boldsymbol{\varepsilon}^i$). While we are at
it, let's observe that there is a relation between the components of the metric and the Kronecker
delta by simply noting that
$$
\sum_{k=1}^N g^{ik} g_{kj} \;=\; \bigl[G^{-1} G\bigr]_{ij} \;=\; [I]_{ij}
\qquad\text{and}\qquad
\sum_{k=1}^N g_{ik} g^{kj} \;=\; \bigl[G G^{-1}\bigr]_{ij} \;=\; [I]_{ij} \;.
$$
So,
$$
\sum_{k=1}^N g^{ik} g_{kj} \;=\; \delta^i_j
\qquad\text{and}\qquad
\sum_{k=1}^N g_{ik} g^{kj} \;=\; \delta_i^j \;. \tag{11.5}
$$
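A minimal numerical sketch of equations (11.3) through (11.5), not part of the text; NumPy and the fixed non-orthogonal basis are illustrative assumptions.

    import numpy as np

    # g_ij = eps_i . eps_j,  g^ij = [G^-1]_ij = eps^i . eps^j,  and (11.5).
    eps_1, eps_2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
    E = np.column_stack([eps_1, eps_2])
    E_recip = np.linalg.inv(E)             # rows are eps^1, eps^2

    G = E.T @ E                            # covariant metric components g_ij
    G_inv = np.linalg.inv(G)               # contravariant components g^ij

    print(np.allclose(G_inv, E_recip @ E_recip.T))   # g^ij = eps^i . eps^j
    print(np.allclose(G_inv @ G, np.eye(2)))         # sum_k g^ik g_kj = delta^i_j
    print(np.allclose(E_recip.T, E @ G_inv))         # eps^i = sum_k g^ik eps_k  (11.4)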
Co/Contra-variant Conversion for Vector Fields
Any vector field $\mathbf{F}$ on our space can be written using either its covariant or contravariant
components,
$$
\mathbf{F} \;=\; \sum_{i=1}^N F^i \boldsymbol{\varepsilon}_i
\;=\; \sum_{j=1}^N F_j \boldsymbol{\varepsilon}^j \;.
$$
Using the relations
$$
\boldsymbol{\varepsilon}_i \;=\; \sum_{j=1}^N g_{ij}\, \boldsymbol{\varepsilon}^j \;,\qquad
\boldsymbol{\varepsilon}^i \;=\; \sum_{j=1}^N g^{ij}\, \boldsymbol{\varepsilon}_j
\qquad\text{and}\qquad
\boldsymbol{\varepsilon}^k \cdot \boldsymbol{\varepsilon}_j \;=\; \delta^k_j
$$
(which we've already discussed), you can easily derive the formulas for computing the $F_i$'s from
the $F^j$'s, along with the formulas for computing the $F^j$'s from the $F_i$'s:

?
Exercise: Show that
$$
F_j \;=\; \sum_{k=1}^N g_{jk} F^k
\qquad\text{and}\qquad
F^i \;=\; \sum_{k=1}^N g^{ik} F_k \;.
$$
(Hint: Start with $\mathbf{F} = \sum_{k=1}^N F^k \boldsymbol{\varepsilon}_k$ and replace $\boldsymbol{\varepsilon}_k$ with the appropriate formula of $\boldsymbol{\varepsilon}^j$'s
from above.)
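A short numerical check of these conversion formulas (again a sketch, not from the text; NumPy and the illustrative non-orthogonal basis are assumptions).

    import numpy as np

    # Lowering and raising components: F_j = sum_k g_jk F^k, F^i = sum_k g^ik F_k.
    eps_1, eps_2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
    E = np.column_stack([eps_1, eps_2])
    G = E.T @ E
    G_inv = np.linalg.inv(G)

    F = np.array([1.5, -0.5])          # a vector, in Cartesian components
    F_contra = np.linalg.solve(E, F)   # F^k : components along eps_k
    F_co = E.T @ F                     # F_j = F . eps_j

    print(np.allclose(F_co, G @ F_contra))       # F_j = sum_k g_jk F^k
    print(np.allclose(F_contra, G_inv @ F_co))   # F^i = sum_k g^ik F_k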
11.4 Converting Between Two Coordinate Systems
Suppose we have a second coordinate system with reciprocal basis fields
$$
\{(\bar{x}^1, \bar{x}^2, \ldots, \bar{x}^N)\} \;,\quad
\{\bar{\boldsymbol{\varepsilon}}_1, \bar{\boldsymbol{\varepsilon}}_2, \ldots, \bar{\boldsymbol{\varepsilon}}_N\}
\quad\text{and}\quad
\{\bar{\boldsymbol{\varepsilon}}^1, \bar{\boldsymbol{\varepsilon}}^2, \ldots, \bar{\boldsymbol{\varepsilon}}^N\}
$$
(where $\bar{\boldsymbol{\varepsilon}}_i = \partial\mathbf{r}/\partial\bar{x}^i$). By the chain rule,
$$
\bar{\boldsymbol{\varepsilon}}_i
\;=\; \frac{\partial \mathbf{r}}{\partial \bar{x}^i}
\;=\; \sum_{j=1}^N \frac{\partial x^j}{\partial \bar{x}^i} \frac{\partial \mathbf{r}}{\partial x^j}
\;=\; \sum_{j=1}^N \frac{\partial x^j}{\partial \bar{x}^i}\, \boldsymbol{\varepsilon}_j \;.
$$
So the formula for finding each $\bar{\boldsymbol{\varepsilon}}_i$ from the $\boldsymbol{\varepsilon}_j$'s is
$$
\bar{\boldsymbol{\varepsilon}}_i \;=\; \sum_{j=1}^N \frac{\partial x^j}{\partial \bar{x}^i}\, \boldsymbol{\varepsilon}_j
\qquad\text{for } i = 1, 2, \ldots, N \;. \tag{11.6a}
$$
Likewise,
$$
\boldsymbol{\varepsilon}_i \;=\; \sum_{j=1}^N \frac{\partial \bar{x}^j}{\partial x^i}\, \bar{\boldsymbol{\varepsilon}}_j
\qquad\text{for } i = 1, 2, \ldots, N \;. \tag{11.6b}
$$
Finding the relation between the $\bar{\boldsymbol{\varepsilon}}^i$'s and the $\boldsymbol{\varepsilon}^i$'s is a bit more tricky. Let
$$
\bar{\boldsymbol{\varepsilon}}^i \;=\; \sum_{k=1}^N A^i_{\;k}\, \boldsymbol{\varepsilon}^k
$$
where the $A^i_{\;k}$'s are to be determined. To be precise, they must be the (unique) scalar fields such
that
$$
\delta^i_j
\;=\; \bar{\boldsymbol{\varepsilon}}^i \cdot \bar{\boldsymbol{\varepsilon}}_j
\;=\; \left(\sum_{n=1}^N A^i_{\;n}\, \boldsymbol{\varepsilon}^n\right)
\cdot
\left(\sum_{k=1}^N \frac{\partial x^k}{\partial \bar{x}^j}\, \boldsymbol{\varepsilon}_k\right)
\;=\; \cdots
\;=\; \sum_{k=1}^N A^i_{\;k}\, \frac{\partial x^k}{\partial \bar{x}^j} \;. \tag{11.7}
$$
Now, observe that, because the $\bar{x}^k$'s are independent coordinates,
$$
\frac{\partial \bar{x}^i}{\partial \bar{x}^j}
\;=\; \text{rate } \bar{x}^i \text{ varies as } \bar{x}^j \text{ varies}
\;=\;
\begin{cases}
0 & \text{if } j \neq i \\
1 & \text{if } i = j
\end{cases}
\;=\; \delta^i_j \;.
$$
This and the chain rule give us
$$
\delta^i_j
\;=\; \frac{\partial \bar{x}^i}{\partial \bar{x}^j}
\;=\; \sum_{k=1}^N \frac{\partial \bar{x}^i}{\partial x^k} \frac{\partial x^k}{\partial \bar{x}^j} \;.
$$
Comparing this with equation (11.7), we see that, for equation (11.7) to hold, we must have
$$
A^i_{\;k} \;=\; \frac{\partial \bar{x}^i}{\partial x^k} \;.
$$
Thus,
$$
\bar{\boldsymbol{\varepsilon}}^i \;=\; \sum_{k=1}^N \frac{\partial \bar{x}^i}{\partial x^k}\, \boldsymbol{\varepsilon}^k \;. \tag{11.8a}
$$
Likewise,
$$
\boldsymbol{\varepsilon}^i \;=\; \sum_{k=1}^N \frac{\partial x^i}{\partial \bar{x}^k}\, \bar{\boldsymbol{\varepsilon}}^k \;. \tag{11.8b}
$$
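Here is a numerical sketch of (11.6a) and (11.8a); it is not from the text, and it assumes NumPy, with the unbarred system taken to be Cartesian $(x, y)$ and the barred system polar $(r, \theta)$, so that the Cartesian $\boldsymbol{\varepsilon}_j$ and $\boldsymbol{\varepsilon}^j$ are just the standard basis vectors.

    import numpy as np

    # Barred tangent vectors from the Jacobian dx^j/dxbar^i, barred reciprocal
    # vectors from the inverse Jacobian dxbar^i/dx^k.
    r, th = 1.5, 0.9

    # J[i, j] = dx^j / dxbar^i   (rows indexed by r, theta)
    J = np.array([[np.cos(th),       np.sin(th)],
                  [-r*np.sin(th),    r*np.cos(th)]])
    # Jinv[i, k] = dxbar^i / dx^k
    Jinv = np.array([[np.cos(th),      np.sin(th)],
                     [-np.sin(th)/r,   np.cos(th)/r]])

    # By (11.6a) the rows of J are the barred eps_i; by (11.8a) the rows of Jinv
    # are the barred eps^i (the unbarred basis is the standard basis here).
    eps_bar  = J
    epsr_bar = Jinv

    print(np.allclose(epsr_bar @ eps_bar.T, np.eye(2)))  # epsbar^i . epsbar_j = delta
    print(np.allclose(Jinv, np.linalg.inv(J).T))         # chain rule: Jacobians invert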
With equations (11.6) and (11.8), you can now derive what may be the most important
relations in tensor analysis.
?
Exercise: Let $\mathbf{F}$ be a vector field.
a: Show that the contravariant components of $\mathbf{F}$ with respect to the $\{(\bar{x}^1, \bar{x}^2, \ldots, \bar{x}^N)\}$
coordinate system are related to those with respect to the $\{(x^1, x^2, \ldots, x^N)\}$ coordinate
system by
$$
\bar{F}^j \;=\; \sum_{i=1}^N \frac{\partial \bar{x}^j}{\partial x^i}\, F^i \;. \tag{11.9}
$$
(Hint: Start with $\mathbf{F} = \sum_{i=1}^N F^i \boldsymbol{\varepsilon}_i$ and apply equation (11.6b).)
b: Show that the covariant components of $\mathbf{F}$ with respect to the $\{(\bar{x}^1, \bar{x}^2, \ldots, \bar{x}^N)\}$ coordinate
system are related to those with respect to the $\{(x^1, x^2, \ldots, x^N)\}$ coordinate system
by
$$
\bar{F}_i \;=\; \sum_{j=1}^N \frac{\partial x^j}{\partial \bar{x}^i}\, F_j \;. \tag{11.10}
$$
Equation (11.9) is known as the rank 1 contravariant transformation law, and equation
(11.10) is known as the rank 1 covariant transformation law. For some, these laws are the basic
defining equations for tensors.
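These laws can be checked numerically. The sketch below is not from the text; it assumes NumPy, Cartesian unbarred coordinates, polar barred coordinates, and an arbitrarily chosen vector.

    import numpy as np

    # Rank 1 transformation laws (11.9) and (11.10).
    r, th = 1.5, 0.9
    J = np.array([[np.cos(th),     np.sin(th)],    # J[i, j] = dx^j/dxbar^i
                  [-r*np.sin(th),  r*np.cos(th)]])
    Jinv = np.linalg.inv(J).T                      # Jinv[j, i] = dxbar^j/dx^i

    F = np.array([2.0, 1.0])      # Cartesian components, so F^i = F_i here

    F_bar_contra = Jinv @ F       # Fbar^j = sum_i (dxbar^j/dx^i) F^i    (11.9)
    F_bar_co     = J @ F          # Fbar_i = sum_j (dx^j/dxbar^i) F_j    (11.10)

    # Consistency: F is recovered from its barred contravariant components, and
    # the barred metric gbar = J J^T lowers Fbar^j to Fbar_i.
    eps_bar = J                                        # rows are epsbar_1, epsbar_2
    print(np.allclose(eps_bar.T @ F_bar_contra, F))    # F = sum_j Fbar^j epsbar_j
    print(np.allclose(F_bar_co, (J @ J.T) @ F_bar_contra))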
!
Example 11.3 (the gradient): Let $\psi$ be a scalar field with coordinate formulas
$\psi(x^1, \ldots, x^N)$ and $\bar{\psi}(\bar{x}^1, \ldots, \bar{x}^N)$,
$$
\psi(\mathbf{r}) \;=\; \psi(x^1, x^2, \ldots, x^N)
\qquad\text{with}\quad \mathbf{r} \leftrightarrow (x^1, x^2, \ldots, x^N) \;,
$$
and similarly for the barred coordinate system.
In example 11.2, we noted that the orthogonal coordinate system formula for the gradient of
$\psi$ could be written as
$$
\nabla\psi \;=\; \sum_{i=1}^N \frac{\partial \psi}{\partial x^i}\, \boldsymbol{\varepsilon}^i \;,
$$
which meant that the covariant components of $\nabla\psi$ are given by
$$
[\nabla\psi]_j \;=\; \frac{\partial \psi}{\partial x^j}
\qquad\text{when } \{(x^1, x^2, \ldots, x^N)\} \text{ is an orthogonal system.}
$$
We suspect the same formula holds even if the system is not orthogonal.

To verify this suspicion, let $\{(x^1, \ldots, x^N)\}$ be any orthogonal system (so the above
formula for $[\nabla\psi]_j$ holds), and let $\{(\bar{x}^1, \ldots, \bar{x}^N)\}$ be any other coordinate system,
orthogonal or not. By the rank 1 covariant transformation law (equation (11.10)) and the chain rule,
$$
\overline{[\nabla\psi]}_i
\;=\; \sum_{j=1}^N \frac{\partial x^j}{\partial \bar{x}^i}\, [\nabla\psi]_j
\;=\; \sum_{j=1}^N \frac{\partial x^j}{\partial \bar{x}^i}\, \frac{\partial \psi}{\partial x^j}
\;=\; \frac{\partial}{\partial \bar{x}^i}\Bigl[\psi\bigl(x^1, x^2, \ldots, x^N\bigr)\Bigr]
\;=\; \frac{\partial \bar{\psi}}{\partial \bar{x}^i} \;,
$$
confirming that
$$
\overline{[\nabla\psi]}_i \;=\; \frac{\partial \bar{\psi}}{\partial \bar{x}^i}
\qquad\text{in any coordinate system } \{(\bar{x}^1, \bar{x}^2, \ldots, \bar{x}^N)\} \;,
$$
and, thus, also confirming the suspicion expressed in example 11.2 that
$$
\nabla\psi \;=\; \sum_{i=1}^N \frac{\partial \psi}{\partial x^i}\, \boldsymbol{\varepsilon}^i
$$
is a coordinate-independent formula for the gradient.
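As a numerical postscript (a sketch, not from the text; NumPy, the skewed $(u, v)$ system with $x = u + v$, $y = v$, and $\psi = x^2 + xy$ are illustrative assumptions), the covariant gradient formula gives the same geometric vector in two very different coordinate systems.

    import numpy as np

    # grad(psi) = sum_k (dpsi/dx^k) eps^k, computed in Cartesian coordinates and
    # in a skewed (u, v) system, yields the same vector.
    u, v = 0.3, 1.2
    x, y = u + v, v

    # Cartesian version: the eps^k are the standard basis vectors.
    grad_cartesian = np.array([2*x + y, x])        # gradient of psi = x^2 + x*y

    # Skewed version: reciprocal basis from the tangent basis eps_1=(1,0), eps_2=(1,1).
    E = np.column_stack([[1.0, 0.0], [1.0, 1.0]])
    eps_recip = np.linalg.inv(E)                   # rows are eps^1, eps^2
    dpsi = np.array([2*x + y, 3*x + y])            # (dpsi/du, dpsi/dv)
    grad_skewed = dpsi @ eps_recip                 # sum_k (dpsi/dx^k) eps^k

    print(np.allclose(grad_cartesian, grad_skewed))   # True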
11.5 So What Is A Tensor, Anyway?
I will give you two definitions: a good one, and the traditional one.

A Good (but Long) Definition of Tensors

Suppose we have an N-dimensional vector space $V$. A tensor $T$ is any single linear algebraic
object (a scalar, a vector, a linear transformation of vectors, a linear transformation of linear
transformations, etc.) defined on $V$. The rank of $T$ refers to the number of components of $T$
with respect to any basis for $V$. In particular:

$T$ is rank 0 $\iff$ $T$ has $1 = N^0$ components (i.e., $T$ is a scalar)
$T$ is rank 1 $\iff$ $T$ has $N^1$ components (i.e., $T$ is a vector)
$T$ is rank 2 $\iff$ $T$ has $N^2$ components (e.g., $T$ is a linear transformation)
$\vdots$
$T$ is rank $m$ $\iff$ $T$ has $N^m$ components
Now suppose we have an N-dimensional space of positions. A rank $m$ tensor field $T$
(usually just called a tensor) is just a rank $m$ tensor-valued function of position. That is,
$$
T(p) \;=\; \text{a rank } m \text{ tensor for the tangent vector space at } p \;.
$$
So,

$T$ is a rank 0 tensor field $\iff$ $T(p)$ is a scalar for each position $p$ $\iff$ $T$ is a scalar field.
$T$ is a rank 1 tensor field $\iff$ $T(p)$ is a vector for each position $p$ $\iff$ $T$ is a vector field.
$\vdots$
Given a coordinate system and associated reciprocal basis fields
$$
\{(x^1, x^2, \ldots, x^N)\} \;,\quad
\{\boldsymbol{\varepsilon}_1, \boldsymbol{\varepsilon}_2, \ldots, \boldsymbol{\varepsilon}_N\}
\quad\text{and}\quad
\{\boldsymbol{\varepsilon}^1, \boldsymbol{\varepsilon}^2, \ldots, \boldsymbol{\varepsilon}^N\}
$$
(where $\boldsymbol{\varepsilon}_k = \partial\mathbf{r}/\partial x^k$), the covariant components of $T$ (denoted by $T_i$ or $T_{ij}$ or $T_{ijk}$ or ...,
depending on the rank of $T$) are the components of $T$ with respect to $\{\boldsymbol{\varepsilon}^1, \boldsymbol{\varepsilon}^2, \ldots, \boldsymbol{\varepsilon}^N\}$, while
the contravariant components of $T$ (denoted by $T^i$ or $T^{ij}$ or $T^{ijk}$ or ..., depending on the
rank of $T$) are the components of $T$ with respect to $\{\boldsymbol{\varepsilon}_1, \boldsymbol{\varepsilon}_2, \ldots, \boldsymbol{\varepsilon}_N\}$.
This basically describes what tensors are. They are linear algebraic things defined on the
tangent vector spaces. It can be shown (much as we have done for vector fields) that, if we have
a second coordinate system with associated reciprocal basis fields
$$
\{(\bar{x}^1, \bar{x}^2, \ldots, \bar{x}^N)\} \;,\quad
\{\bar{\boldsymbol{\varepsilon}}_1, \bar{\boldsymbol{\varepsilon}}_2, \ldots, \bar{\boldsymbol{\varepsilon}}_N\}
\quad\text{and}\quad
\{\bar{\boldsymbol{\varepsilon}}^1, \bar{\boldsymbol{\varepsilon}}^2, \ldots, \bar{\boldsymbol{\varepsilon}}^N\}
$$
(where $\bar{\boldsymbol{\varepsilon}}_k = \partial\mathbf{r}/\partial\bar{x}^k$), then the covariant components of a rank $k$ tensor field $T$ will satisfy the
corresponding rank $k$ covariant transformation law:
$$
\bar{T}_i \;=\; \sum_{m=1}^N \frac{\partial x^m}{\partial \bar{x}^i}\, T_m
\qquad\text{(rank 1)}
$$
$$
\bar{T}_{ij} \;=\; \sum_{m=1}^N \sum_{n=1}^N \frac{\partial x^m}{\partial \bar{x}^i} \frac{\partial x^n}{\partial \bar{x}^j}\, T_{mn}
\qquad\text{(rank 2)}
$$
$$
\bar{T}_{ijk} \;=\; \sum_{m=1}^N \sum_{n=1}^N \sum_{o=1}^N \frac{\partial x^m}{\partial \bar{x}^i} \frac{\partial x^n}{\partial \bar{x}^j} \frac{\partial x^o}{\partial \bar{x}^k}\, T_{mno}
\qquad\text{(rank 3)}
$$
$$
\vdots
$$
while its contravariant components will satisfy the corresponding rank $k$ contravariant law of
transformation:
$$
\bar{T}^i \;=\; \sum_{m=1}^N \frac{\partial \bar{x}^i}{\partial x^m}\, T^m
\qquad\text{(rank 1)}
$$
$$
\bar{T}^{ij} \;=\; \sum_{m=1}^N \sum_{n=1}^N \frac{\partial \bar{x}^i}{\partial x^m} \frac{\partial \bar{x}^j}{\partial x^n}\, T^{mn}
\qquad\text{(rank 2)}
$$
$$
\bar{T}^{ijk} \;=\; \sum_{m=1}^N \sum_{n=1}^N \sum_{o=1}^N \frac{\partial \bar{x}^i}{\partial x^m} \frac{\partial \bar{x}^j}{\partial x^n} \frac{\partial \bar{x}^k}{\partial x^o}\, T^{mno}
\qquad\text{(rank 3)}
$$
$$
\vdots
$$
In addition, one can define and deal with mixed co- and contravariant components and the
corresponding transformation laws.
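For instance, the covariant metric components themselves obey the rank 2 covariant law. The sketch below is not from the text; it assumes NumPy, with Cartesian as the unbarred system and polar as the barred one.

    import numpy as np

    # gbar_ij = sum_mn (dx^m/dxbar^i)(dx^n/dxbar^j) g_mn, with g_mn = delta_mn.
    r, th = 1.5, 0.9
    J = np.array([[np.cos(th),      np.sin(th)],   # J[i, m] = dx^m/dxbar^i
                  [-r*np.sin(th),   r*np.cos(th)]])

    g = np.eye(2)                                  # Cartesian metric components
    g_bar_law = J @ g @ J.T                        # the rank 2 covariant law

    # direct computation: gbar_ij = epsbar_i . epsbar_j, with epsbar_i the rows of J
    g_bar_direct = J @ J.T

    print(np.allclose(g_bar_law, g_bar_direct))            # True
    print(np.allclose(g_bar_law, np.diag([1.0, r**2])))    # the familiar polar metric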
Traditional Definition of Tensors

A rank 0 tensor (field) is a scalar field. For any positive integer $m$, a rank $m$ covariant
(respectively, contravariant) tensor (field) consists of an infinite collection of sets of $N^m$ scalar fields
$$
T_i \text{ or } T_{ij} \text{ or } \ldots
\qquad\text{(respectively, } T^i \text{ or } T^{ij} \text{ or } \ldots\text{)}
$$
(with each set corresponding to a different coordinate system) that satisfy the rank $m$ covariant
(respectively, contravariant) transformation laws.
11.6 And What Is This Mysterious Metric That Keeps Popping Up?

Simply put, the metric is our favorite bilinear form, the dot product. To see this, you must first
be told that a bilinear form $A$ on a vector space $V$ is a function which maps pairs of vectors into
$\mathbb{R}$, and which is linear in each variable. That is, for every two vectors $\mathbf{v}$ and $\mathbf{w}$ in $V$, $A(\mathbf{v}, \mathbf{w})$
is a real number, and for every three vectors $\mathbf{u}$, $\mathbf{v}$ and $\mathbf{w}$, and every pair of real numbers $\alpha$ and
$\beta$, we have
$$
A(\alpha\mathbf{v} + \beta\mathbf{w}, \mathbf{u}) \;=\; \alpha A(\mathbf{v}, \mathbf{u}) + \beta A(\mathbf{w}, \mathbf{u})
$$
and
$$
A(\mathbf{u}, \alpha\mathbf{v} + \beta\mathbf{w}) \;=\; \alpha A(\mathbf{u}, \mathbf{v}) + \beta A(\mathbf{u}, \mathbf{w}) \;.
$$
Given any two bases $\mathcal{B}_1$ and $\mathcal{B}_2$ for the vector space, it can be shown that there is a matrix
$A = A_{\mathcal{B}_2,\mathcal{B}_1}$ such that
$$
A(\mathbf{v}, \mathbf{w}) \;=\; \langle \mathbf{v} |_{\mathcal{B}_2}\, A \,| \mathbf{w} \rangle_{\mathcal{B}_1} \;.
$$
The components of this matrix are called the components of $A$ with respect to bases $\mathcal{B}_1$ and
$\mathcal{B}_2$.
While we didn't discuss bilinear forms explicitly, there was one we used extensively. That
was the dot product of vectors. Keep this in mind.

Now remember, also, that we originally defined the covariant components of the metric, the
$g_{ij}$'s, to satisfy
$$
\left(\frac{ds}{dt}\right)^2
\;=\; \frac{d\mathbf{r}}{dt} \cdot \frac{d\mathbf{r}}{dt}
\;=\; \sum_{i=1}^N \sum_{j=1}^N g_{ij}\, \frac{dx^i}{dt} \frac{dx^j}{dt}
\;=\; \sum_{i=1}^N \sum_{j=1}^N \frac{dx^i}{dt}\, g_{ij}\, \frac{dx^j}{dt} \;.
$$
Letting $\mathbf{v} = \frac{d\mathbf{r}}{dt}$, this becomes
$$
\mathbf{v} \cdot \mathbf{v}
\;=\; \sum_{i=1}^N \sum_{j=1}^N g_{ij}\, v^i v^j
\;=\;
\begin{bmatrix} v^1 & v^2 & \cdots & v^N \end{bmatrix}
G
\begin{bmatrix} v^1 \\ v^2 \\ \vdots \\ v^N \end{bmatrix}
\qquad\text{where}\quad [G]_{ij} = g_{ij} \;.
$$
Further letting
$$
\mathcal{B}_{\text{COV}} \;=\; \{\boldsymbol{\varepsilon}^1, \boldsymbol{\varepsilon}^2, \ldots, \boldsymbol{\varepsilon}^N\}
\qquad\text{and}\qquad
\mathcal{B}_{\text{CON}} \;=\; \{\boldsymbol{\varepsilon}_1, \boldsymbol{\varepsilon}_2, \ldots, \boldsymbol{\varepsilon}_N\} \;,
$$
this can be written as⁴
$$
\mathbf{v} \cdot \mathbf{v} \;=\; \langle \mathbf{v} |_{\mathcal{B}_{\text{CON}}}\, G \,| \mathbf{v} \rangle_{\mathcal{B}_{\text{CON}}} \;.
$$
This can be expanded using material developed in the previous several sections. Given two vector
fields $\mathbf{v}$ and $\mathbf{w}$, we have
$$
\mathbf{v} \cdot \mathbf{w}
\;=\; \sum_{i=1}^N v^i w_i
\;=\; \sum_{i=1}^N v^i \sum_{j=1}^N g_{ij} w^j
\;=\; \sum_{i=1}^N \sum_{j=1}^N v^i g_{ij} w^j
\;=\; \langle \mathbf{v} |_{\mathcal{B}_{\text{CON}}}\, G \,| \mathbf{w} \rangle_{\mathcal{B}_{\text{CON}}} \;.
$$
Likewise, you can verify that
$$
\mathbf{v} \cdot \mathbf{w}
\;=\; \sum_{i=1}^N \sum_{j=1}^N v_i g^{ij} w_j
\;=\; \langle \mathbf{v} |_{\mathcal{B}_{\text{COV}}}\, G^{-1} \,| \mathbf{w} \rangle_{\mathcal{B}_{\text{COV}}} \;.
$$
Now you can pretty well see what the metric really is: it is the bilinear form $G$ which
is simply the dot product,
$$
G(\mathbf{v}, \mathbf{w}) \;=\; \mathbf{v} \cdot \mathbf{w} \;.
$$
The covariant components of the metric are just the components of this bilinear form with respect
to $\mathcal{B}_{\text{CON}}$, and the contravariant components are just the components with respect to $\mathcal{B}_{\text{COV}}$. That
is,
$$
G \;=\; G_{\mathcal{B}_{\text{CON}},\mathcal{B}_{\text{CON}}}
\qquad\text{and}\qquad
G^{-1} \;=\; G_{\mathcal{B}_{\text{COV}},\mathcal{B}_{\text{COV}}} \;.
$$
Moreover, if you think about it, we also have
$$
G_{\mathcal{B}_{\text{CON}},\mathcal{B}_{\text{COV}}} \;=\; I \;=\; G_{\mathcal{B}_{\text{COV}},\mathcal{B}_{\text{CON}}}
$$
where, as you should recall, $I$ is the $N \times N$ identity matrix.
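A closing numerical sketch (not from the text; NumPy and the skewed basis are illustrative assumptions) of the metric acting as the dot-product bilinear form.

    import numpy as np

    # For the skewed basis eps_1 = (1,0), eps_2 = (1,1): the dot product equals the
    # contravariant components sandwiching G, the covariant components sandwiching
    # G^-1, and the mixed matrix is the identity.
    E = np.column_stack([[1.0, 0.0], [1.0, 1.0]])
    G = E.T @ E

    v = np.array([1.0, 2.0])
    w = np.array([-0.5, 3.0])
    v_contra, w_contra = np.linalg.solve(E, v), np.linalg.solve(E, w)  # w.r.t. B_CON
    v_co, w_co = E.T @ v, E.T @ w                                      # w.r.t. B_COV

    print(np.isclose(v @ w, v_contra @ G @ w_contra))          # G = G_{B_CON, B_CON}
    print(np.isclose(v @ w, v_co @ np.linalg.inv(G) @ w_co))   # G^-1 = G_{B_COV, B_COV}
    print(np.isclose(v @ w, v_contra @ w_co))                  # G_{B_CON, B_COV} = I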
⁴ WARNING: In this section
$$
\langle \mathbf{v} |_{\mathcal{B}} \;=\; \bigl(| \mathbf{v} \rangle_{\mathcal{B}}\bigr)^{\mathsf{T}} \;.
$$
This does not quite agree with our convention in previous chapters, under which $\langle \mathbf{v} |_{\mathcal{B}}$ is the row matrix of the
components of $\mathbf{v}$ with respect to the reciprocal basis.