
MTH6140 Linear Algebra II

Notes 7 16th December 2010

7 Inner product spaces


Ordinary Euclidean space is a 3-dimensional vector space over R, but it is more than
that: the extra geometric structure (lengths, angles, etc.) can all be derived from a
special kind of bilinear form on the space known as an inner product. We examine
inner product spaces and their linear maps in this chapter. Throughout the chapter,
all vector spaces will be real vector spaces, i.e. the underlying field is always the real
numbers R.

7.1 Inner products and orthonormal bases


Definition 7.1 An inner product on a real vector space V is a function b : V ×V → R
satisfying

• b is bilinear (that is, b is linear in the first variable when the second is kept
constant and vice versa);

• b is symmetric (that is, b(v, w) = b(w, v) for all v, w ∈ V );

• b is positive definite, that is, b(v, v) ≥ 0 for all v ∈ V , and b(v, v) = 0 if and only
if v = 0.

We usually write b(v, w) as v · w. Because of this notation, an inner product is sometimes called a dot product.

Definition 7.2 When a real vector space is equipped with an inner product, we refer
to it as an inner product space.

Geometrically, in a real vector space, we define v · w = |v| |w| cos θ, where |v| and
|w| are the lengths of v and w, and θ is the angle between v and w. Of course this
definition doesn’t work if either v or w is zero, but in this case v · w = 0. But it is much
easier to reverse the process. Given an inner product on V, we define

|v| = √(v · v)

for any vector v ∈ V; and, if v, w ≠ 0, then we define the angle between them to be θ,
where

cos θ = (v · w) / (|v| |w|).
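These formulas can be checked concretely. Here is a minimal Python sketch using the standard inner product on Rⁿ (the helper names `dot`, `length` and `angle` are ours, not from the notes):

```python
import math

def dot(v, w):
    """Standard inner product on R^n: sum of coordinatewise products."""
    return sum(a * b for a, b in zip(v, w))

def length(v):
    """|v| = sqrt(v . v)."""
    return math.sqrt(dot(v, v))

def angle(v, w):
    """Angle between nonzero v and w, from cos(theta) = v.w / (|v| |w|)."""
    return math.acos(dot(v, w) / (length(v) * length(w)))

# The angle between (1, 0) and (1, 1) is pi/4, i.e. 45 degrees.
print(math.degrees(angle([1.0, 0.0], [1.0, 1.0])))
```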

Definition 7.3 A basis (v1, . . . , vn) for an inner product space is called orthonormal if
vi · vj = δij (the Kronecker delta) for 1 ≤ i, j ≤ n.
In other words, vi · vi = 1 for all 1 ≤ i ≤ n, and vi · vj = 0 for i ≠ j.

Remark: If vectors v1, . . . , vn satisfy vi · vj = δij, then they are necessarily linearly
independent. For suppose that c1v1 + · · · + cnvn = 0. Taking the inner product of this
equation with vi, we find that ci = 0, for all i.

7.2 The Gram–Schmidt process


The Gram–Schmidt process is a constructive method for producing an orthonormal
basis of V from any given basis.
Let w1, . . . , wn be any basis for V. The process works as follows.

• Since w1 ≠ 0, we have w1 · w1 > 0, that is, |w1| > 0. Put v1 = w1/|w1|; then
|v1| = 1, that is, v1 · v1 = 1.

• For i = 2, . . . , n, let wi′ = wi − (v1 · wi)v1. Then

v1 · wi′ = v1 · wi − (v1 · wi)(v1 · v1) = 0

for i = 2, . . . , n.

• Now apply the Gram–Schmidt process recursively to (w2′, . . . , wn′).

Since we replace these vectors by linear combinations of themselves, their inner prod-
ucts with v1 remain zero throughout the process. So if we end up with vectors v2, . . . , vn,
then v1 · vi = 0 for i = 2, . . . , n. By induction, we can assume that vi · vj = δij for
i, j = 2, . . . , n; by what we have said, this holds if i or j is 1 as well.
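The steps above can be sketched in Python. This is an iterative variant, equivalent to the recursive description: each new vector has its components along all previously constructed vectors subtracted before being normalised (the function names are ours):

```python
import math

def dot(v, w):
    """Standard inner product on R^n."""
    return sum(a * b for a, b in zip(v, w))

def gram_schmidt(basis):
    """Return an orthonormal basis spanning the same space as `basis`."""
    ortho = []
    for w in basis:
        # w' = w - sum_j (v_j . w) v_j: remove components along earlier v's
        wp = list(w)
        for v in ortho:
            c = dot(v, wp)
            wp = [x - c * y for x, y in zip(wp, v)]
        # Normalise: v = w' / |w'|
        norm = math.sqrt(dot(wp, wp))
        ortho.append([x / norm for x in wp])
    return ortho
```

Applied to a basis that is already orthonormal, such as the standard basis, the process returns it unchanged.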

Definition 7.4 The inner product on Rn for which the standard basis is orthonormal
is called the standard inner product on Rn .

Example 7.1 In R3 (with the standard inner product), apply the Gram–Schmidt pro-
cess to the vectors

w1 = (1, 2, 2)ᵀ, w2 = (1, 1, 0)ᵀ, w3 = (1, 0, 0)ᵀ.

We have w1 · w1 = 9, so in the first step we put

v1 = (1/3)w1 = (1/3, 2/3, 2/3)ᵀ.

Now v1 · w2 = 1 and v1 · w3 = 1/3, so in the second step we find

w2′ = w2 − v1 = (2/3, 1/3, −2/3)ᵀ,
w3′ = w3 − (1/3)v1 = (8/9, −2/9, −2/9)ᵀ.

Now we apply Gram–Schmidt recursively to w2′ and w3′. We have w2′ · w2′ = 1, so

v2 = w2′ = (2/3, 1/3, −2/3)ᵀ.

Then v2 · w3′ = 2/3, so

w3″ = w3′ − (2/3)v2 = (4/9, −4/9, 2/9)ᵀ.

Finally, w3″ · w3″ = 4/9, so

v3 = (3/2)w3″ = (2/3, −2/3, 1/3)ᵀ.

Check that the three vectors v1, v2, v3 we have found really do form an orthonormal
basis.
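That check can be carried out mechanically. A short self-contained Python snippet, assuming the standard inner product:

```python
def dot(v, w):
    """Standard inner product on R^3."""
    return sum(a * b for a, b in zip(v, w))

# The basis constructed in Example 7.1
v1 = [1/3, 2/3, 2/3]
v2 = [2/3, 1/3, -2/3]
v3 = [2/3, -2/3, 1/3]
basis = [v1, v2, v3]

# v_i . v_j should equal the Kronecker delta (1 if i = j, else 0)
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert abs(dot(basis[i], basis[j]) - expected) < 1e-12

print("v1, v2, v3 form an orthonormal basis")
```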

7.3 Adjoints and orthogonal linear maps
Definition 7.5 Let V be an inner product space, and T : V → V a linear map. Then
the adjoint of T is the linear map T∗ : V → V defined by

v · T∗(w) = T(v) · w for all v, w ∈ V.
Proposition 7.1 If T : V → V is represented by the matrix A relative to an orthonormal
basis of V, then T∗ is represented by the transposed matrix Aᵀ.
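Proposition 7.1 can be illustrated numerically: with the standard inner product, the defining identity v · T∗(w) = T(v) · w becomes v · (Aᵀw) = (Av) · w. A quick sketch (the 2×2 matrix and the vectors are arbitrary choices of ours):

```python
def dot(x, y):
    """Standard inner product on R^n."""
    return sum(a * b for a, b in zip(x, y))

def matvec(M, x):
    """Matrix-vector product M x."""
    return [dot(row, x) for row in M]

def transpose(M):
    """Transpose of a matrix given as a list of rows."""
    return [list(col) for col in zip(*M)]

A = [[1.0, 2.0], [3.0, 4.0]]
v = [5.0, 6.0]
w = [7.0, 8.0]

# v . (A^T w) should equal (A v) . w
assert dot(v, matvec(transpose(A), w)) == dot(matvec(A, v), w)
```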
Now we define two important classes of linear maps on V .
Definition 7.6 Let T be a linear map on an inner product space V .
(a) T is self-adjoint if T ∗ = T .
(b) T is orthogonal if it is invertible and T ∗ = T −1 .
We see that, if T is represented by a matrix A (relative to an orthonormal basis),
then
• T is self-adjoint if and only if A is symmetric;
• T is orthogonal if and only if AᵀA = I, i.e. if and only if A⁻¹ = Aᵀ.
We will examine self-adjoint maps (or symmetric matrices) further in the next
chapter. Here we look at orthogonal maps.
Theorem 7.2 The following are equivalent for a linear map T on an inner product
space V :
(a) T is orthogonal;
(b) T preserves the inner product, that is, T(v) · T(w) = v · w for all v, w ∈ V;
(c) T maps an orthonormal basis of V to an orthonormal basis.
Proof We have
T (v) · T (w) = v · T ∗ (T (w)),
by the definition of adjoint (see Definition 7.5); so (a) and (b) are equivalent.
Suppose that (v1, . . . , vn) is an orthonormal basis, that is, vi · vj = δij. If (b) holds,
then T(vi) · T(vj) = δij, so that (T(v1), . . . , T(vn)) is an orthonormal basis, and (c)
holds. Conversely, suppose that (c) holds, and let v = ∑ xi vi and w = ∑ yi vi for some
orthonormal basis (v1, . . . , vn), so that v · w = ∑ xi yi. We have

T(v) · T(w) = (∑ xi T(vi)) · (∑ yj T(vj)) = ∑ xi yi,

since T(vi) · T(vj) = δij by assumption; so (b) holds.

Corollary 7.3 T is orthogonal if and only if the columns of the matrix representing T
relative to an orthonormal basis themselves form an orthonormal basis.

Proof The columns of the matrix representing T are just the vectors T(v1), . . . , T(vn),
written in coordinates relative to v1, . . . , vn. So this follows from the equivalence of (a)
and (c) in the theorem. Alternatively, the condition on columns shows that AᵀA = I,
where A is the matrix representing T; so T∗T = I, and T is orthogonal.

Example 7.2 Our earlier example of the Gram–Schmidt process produces the orthogonal
matrix

1/3   2/3   2/3
2/3   1/3  −2/3
2/3  −2/3   1/3

whose columns are precisely the orthonormal basis we constructed in the example.
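We can confirm that this matrix A satisfies AᵀA = I, i.e. that its columns form an orthonormal basis, with a few lines of Python (helper names ours):

```python
A = [[1/3, 2/3, 2/3],
     [2/3, 1/3, -2/3],
     [2/3, -2/3, 1/3]]

def transpose(M):
    """Transpose of a matrix given as a list of rows."""
    return [list(col) for col in zip(*M)]

def matmul(M, N):
    """Matrix product M N."""
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

# A^T A should be the identity matrix
P = matmul(transpose(A), A)
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert abs(P[i][j] - expected) < 1e-12
```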
