Numerical Linear Algebra With Matlab
This document looks at some important concepts in linear algebra and shows how to
solve some basic problems using MATLAB.
Contents
Matrix-Vector products
A rogues gallery of useful matrices
Systems of linear equations
Eigenvectors and Eigenvalues of a matrix
Matrix-Vector products
Let's think about what a matrix-vector product actually does. It's more interesting than you think! Let's take a simple example. Consider the vector [1;0]
multiplied by the matrix [1,1;1,1]:
figure('color',[1 1 1])
A=ones(2)
x=[1;0]
Ax=A*x
line([0 x(1)],[0 x(2)]), hold on, grid on
line([0,Ax(1)],[0,Ax(2)],'color',[1 0 0])
text(1.1,0,'x','fontweight','bold'), text(1.1,1.1,'A*x','fontweight','bold')
axis([-0.5 1.5 -0.5 1.5])
title('Action of a matrix A on a vector x')
A =
     1     1
     1     1
x =
     1
     0
Ax =
     1
     1
[Figure: the vector x and the rotated, stretched vector A*x]
We can see that the matrix multiplication resulted in a vector that was rotated
by a certain angle and also stretched by a certain factor. This action (stretching
and rotating) is extremely useful for switching from one co-ordinate system [x,y]
to another [x',y'], something that you will come across in special relativity
and a number of other areas. To visualise what we mean by a co-ordinate
transformation, consider the basis vectors x:=[1,0] and y:=[0,1], and a set of
points p.
figure('color',[1 1 1])
xy=[1,0;0,1]
p=[0.5,0.3,0.1;0.4,0.2,0.0]
A=[-1 1;1 1]
xyp=A*xy; % Co-ordinate rotation step
Ap=A*p;
line([0 xy(1,1)],[0 xy(2,1)]), grid on, hold on
line([0,xy(1,2)],[0,xy(2,2)])
plot(p(1,:),p(2,:),'*','markersize',6,'color','b')
line([0, xyp(1,1)],[0 xyp(2,1)],'color',[1 0 0])
line([0, xyp(1,2)],[0,xyp(2,2)],'color',[1 0 0])
plot(Ap(1,:),Ap(2,:),'*','markersize',6,'color','r')
axis([-1.5 1.5 -1.5 1.5]), axis square
title('Changing the co-ordinate basis with matrix multiplication')
xy =
     1     0
     0     1
p =
    0.5000    0.3000    0.1000
    0.4000    0.2000         0
A =
    -1     1
     1     1
[Figure: the original and transformed basis vectors and points]
A rogues gallery of useful matrices
There are a number of important matrices that are extremely useful and worth
knowing about. We will use several of these later on in this lecture. Here are a
few:
Identity
Ones/Zeros
Upper/Lower Triangular
Symmetric
Vandermonde
Finite Difference
Rotation
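As a sketch, most of these can be generated with one-line MATLAB commands. The sizes and the example matrix magic(4) below are arbitrary choices for illustration:

```matlab
I  = eye(4);                          % identity matrix
J  = ones(4);  Z0 = zeros(4);         % all-ones and all-zeros matrices
UT = triu(magic(4));                  % upper triangular part of a matrix
LT = tril(magic(4));                  % lower triangular part
S  = magic(4) + magic(4)';            % A + A' is always symmetric
V  = vander(linspace(0,1,5));         % Vandermonde matrix on a grid
e  = ones(5,1);
D2 = spdiags([e -2*e e], -1:1, 5, 5); % tridiagonal finite difference matrix
th = pi/4;
R  = [cos(th) -sin(th); sin(th) cos(th)]; % rotation by th radians
```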
The 4-by-4 identity matrix:
     1     0     0     0
     0     1     0     0
     0     0     1     0
     0     0     0     1
A matrix of ones:
     1     1     1     1
     1     1     1     1
     1     1     1     1
     1     1     1     1
A matrix of zeros:
     0     0     0     0
     0     0     0     0
     0     0     0     0
     0     0     0     0
Z =
   -1.2141   -0.7697   -1.0891    1.5442
   -1.1135    0.3714    0.0326    0.0859
         0   -0.2256    0.5525   -1.4916
         0         0    1.1006   -0.7423
LT =
         0         0         0         0
    2.3505         0         0         0
   -0.6156   -0.7648         0         0
    0.7481   -1.4023   -0.1961         0
Symmetric matrices are matrices that have the same elements above the main
diagonal as below the main diagonal (that is, the matrix equals its own transpose):
A = randn(4);
SYM = A*A' % multiplying A by its transpose always produces a symmetric matrix
SYM =
    3.2298    1.4694   -1.1203    2.6074
    1.4694    8.6143    0.2535    0.6552
   -1.1203    0.2535    2.4990   -0.1348
    2.6074    0.6552   -0.1348    2.6263
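A quick one-line check of the symmetry (assuming the SYM built above):

```matlab
isequal(SYM, SYM')   % a real symmetric matrix equals its own transpose
```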
A Vandermonde matrix built from the grid x = (0:0.25:1)', where each column is a power of x:
         0         0         0         0
    0.2500    0.0625    0.0156    0.0039
    0.5000    0.2500    0.1250    0.0625
    0.7500    0.5625    0.4219    0.3164
    1.0000    1.0000    1.0000    1.0000
[Figure: plot associated with the Vandermonde example]
Generally, for interpolation it's better to use a matrix whose columns are orthogonal polynomials, such as Legendre or Chebyshev polynomials, but this is beyond the
remit of this course.
Finite difference matrices are extremely useful. They are matrices that
approximate the derivative of a function on a grid. One way to calculate them
is to fit an interpolating polynomial through the function and then differentiate
the polynomial to obtain what is called a finite difference stencil; the other
is to use a Taylor expansion to approximate the derivatives.
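The polynomial route can be sketched directly in MATLAB: fit a quadratic through three equally spaced points and differentiate it twice. The spacing h and the test function below are arbitrary illustrations; for a quadratic, the result coincides exactly with the familiar three-point stencil:

```matlab
h  = 0.1;  xs = [0 h 2*h];        % three equally spaced points
f  = @(x) exp(sin(x));            % any smooth test function
c  = polyfit(xs, f(xs), 2);       % interpolating quadratic c(1)*x^2 + c(2)*x + c(3)
d2_poly    = 2*c(1);                        % second derivative of the quadratic
d2_stencil = (f(2*h) - 2*f(h) + f(0))/h^2;  % finite difference stencil
% d2_poly and d2_stencil agree to rounding error
```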
As an illustration, we will use the Taylor expansion.
f(x + Δx) = f(x) + Δx f'(x) + (Δx²/2!) f''(x) + (Δx³/3!) f'''(x) + higher order terms
Here the dash represents a derivative with respect to x. If we ignore terms above
the first derivative, we can write
f'(x) ≈ (f(x + Δx) - f(x)) / Δx
and, keeping terms up to the second derivative,
f''(x) ≈ (f(x + 2Δx) - 2 f(x + Δx) + f(x)) / Δx²
Assembling this stencil row by row gives a differentiation matrix of the form:
    -2     1     0     0     0
     1    -2     1     0     0
     0     1    -2     1     0
     0     0     1    -2     1
     0     0     0     1    -2
Here we use the construct spdiags to tell MATLAB that the matrix is sparse, i.e.
composed of mainly zero elements. In fact, this particular type of sparsity
pattern is called tridiagonal. Consider the second row of this matrix multiplied by a vector containing the function evaluated on the grid of points x_i,
namely [f(x_0), f(x_1), f(x_2), ..., f(x_N)]^T. We have the following result:
(1/Δx²) [1  -2  1  0  ...  0] [f(x_0); f(x_1); f(x_2); ...; f(x_N)] = (f(x_0) - 2 f(x_1) + f(x_2)) / Δx²
That is, the result of multiplying the second row of the differentiation matrix
by a vector representing our function on the grid gives an approximation to the
second derivative of our function at that grid point. As an example, let's calculate
the second derivative of f(x) = exp(sin(x)) on a grid of 21 points:
N=21; x=linspace(0,2*pi,N)'; % set up the grid (as a column vector)
I=ones(N,1); dx=x(2)-x(1);
D2 = spdiags([I, -2*I, I],[-1:1],N,N)/dx^2; % 2nd derivative matrix
f=@(x)(exp(sin(x))) % function
d2 = D2*f(x); % numerical second derivative
d2(1)=1; d2(N)=1; % fix boundary points
figure('color',[1 1 1])
plot(x,f(x),x,d2), grid on, hold on
d2f=@(x)(cos(x).^2.*exp(sin(x))-sin(x).*exp(sin(x))) % actual second derivative
plot(x,d2f(x),'r-.')
title('Approximating a second derivative using a differentiation matrix');
legend('f(x)=e^{sin(x)}','Approximate 2^{nd} deriv','Actual 2^{nd} deriv');
hold off
f =
@(x)(exp(sin(x)))
d2f =
@(x)(cos(x).^2.*exp(sin(x))-sin(x).*exp(sin(x)))
Systems of linear equations
Consider three simultaneous equations for the unknown currents I1, I2 and I3
(such systems arise, for example, from Kirchhoff's laws applied to a resistor network):
I1 + 25(I1 - I2) + 50(I1 - I3) = 10
25(I2 - I1) + 30 I2 + (I2 - I3) = 0
50(I3 - I1) + (I3 - I2) + 55 I3 = 0
which, with a little re-arranging, becomes:
76 I1 - 25 I2 - 50 I3 = 10
-25 I1 + 56 I2 - I3 = 0
-50 I1 - I2 + 106 I3 = 0
Finally, we can gather all the coefficients into a matrix, and the unknowns into
a vector, to write the single matrix linear equation:
[ 76  -25  -50 ] [I1]   [10]
[-25   56   -1 ] [I2] = [ 0]
[-50   -1  106 ] [I3]   [ 0]
In this way we have got the system of equations into the form that we described
above, namely Ax = b.
Of course, for a small system (3 by 3 in this case) we can solve this by hand.
For a larger system, however, we need a computer. The basic approach to
solving such a system is an extension of that used by hand, namely we add and
subtract multiples of one row to one another until we finally end up with an
upper triangular matrix. This action of adding and subtracting multiples of one
row to another is known as a basic row operation. Let us illustrate the process
with MATLAB. Before we begin, we augment the matrix with the right hand side vector:
A =
    76   -25   -50    10
   -25    56    -1     0
   -50    -1   106     0
Column: 1
Divide row 1 by 76
Subtract -25 times row 1 from row 2
Subtract -50 times row 1 from row 3
A =
    1.0000   -0.3289   -0.6579    0.1316
         0   47.7763  -17.4474    3.2895
         0  -17.4474   73.1053    6.5789
Column: 2
Divide row 2 by 47.7763
Subtract -17.4474 times row 2 from row 3
A =
    1.0000   -0.3289   -0.6579    0.1316
         0    1.0000   -0.3652    0.0689
         0         0   66.7337    7.7802
Column: 3
Divide row 3 by 66.7337
A =
    1.0000   -0.3289   -0.6579    0.1316
         0    1.0000   -0.3652    0.0689
         0         0    1.0000    0.1166
The matrix is now in upper triangular form, and back substitution with the
right hand side b = [10; 0; 0] gives the solution:
I =
    0.2449
    0.1114
    0.1166
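In practice you would rarely code the elimination yourself: MATLAB's backslash operator solves Ax = b directly, performing a form of Gaussian elimination internally. A quick check on the circuit system:

```matlab
A = [76 -25 -50; -25 56 -1; -50 -1 106];
b = [10; 0; 0];
I = A\b   % approximately [0.2449; 0.1114; 0.1166], as found above
```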
In the example above, we have performed a special case of a more general
idea - that of transforming a matrix into a special form, so that the equations
can be solved more easily. In general, the idea of transforming a matrix through
a series of elementary row operations is a powerful one, and many special types
of matrix factorisation exist.
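For instance, the row operations above are closely related to the LU factorisation, which MATLAB exposes through lu. Here it is sketched on the circuit matrix from earlier:

```matlab
A = [76 -25 -50; -25 56 -1; -50 -1 106];
[L,U,P] = lu(A);     % L lower triangular, U upper triangular, P a permutation
norm(P*A - L*U)      % close to zero, since P*A = L*U
```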
Eigenvectors and Eigenvalues of a matrix
[Figure: a vector x compared with A*x]
Essentially, what we are saying is that (square) matrices have certain resonant
modes or preferred co-ordinate directions, which we call eigenvectors of the
matrix. For a symmetric matrix the eigenvectors are orthogonal to one another,
and together form an orthogonal basis. If an N x N matrix has N distinct
eigenvalues, it has N linearly independent eigenvectors; if, in addition, none of
its eigenvalues is zero, it is said to have rank N, or to be full rank.
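To see this in MATLAB, we can ask eig for the eigenvectors and eigenvalues of the matrix A = ones(2) used at the start of this lecture; since this A is symmetric, its eigenvectors come out orthogonal:

```matlab
A = ones(2);
[V,D] = eig(A)           % columns of V: eigenvectors; diag(D): eigenvalues 0 and 2
A*V(:,2) - D(2,2)*V(:,2) % the defining property A*v = lambda*v (zero vector)
V(:,1)'*V(:,2)           % the two eigenvectors are orthogonal (zero)
```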
To make all this more concrete, let's consider an example from the mathematical
theory of waves and vibration. In one dimension, the wave equation can be
written as the following partial differential equation (don't worry about what
this means exactly; you'll learn more about all this later).
∂²u/∂t² = c ∂²u/∂x²
Seeking a separated solution u(x,t) = ψ(t)φ(x), we find
ψ''(t) φ(x) = c ψ(t) φ''(x)
or that
ψ''(t) / (c ψ(t)) = φ''(x) / φ(x) = -k²
The only way that the left hand side, which is completely independent of x,
can be equal to the right hand side, which is completely independent of t, is if
both sides are equal to a constant, -k². We are interested in time independent
solutions (standing waves) for this problem, so we can consider only the spatial
part of the equation, which can be written as:
d²φ(x)/dx² = -k² φ(x)
This is an eigenvalue equation, exactly as we had above, and the solutions we are
looking for are called eigenfunctions. This particular equation is important
in physics and is known as the Helmholtz equation.
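We can already attack this equation numerically with the machinery above: discretise the second derivative with a finite difference matrix on the interior grid points (taking φ = 0 at the endpoints) and hand it to eig. The interval [0, pi] and the grid size below are assumptions for illustration; on this interval the exact eigenvalues are -k² = -n² for whole numbers n:

```matlab
N = 50; L = pi;
x  = linspace(0, L, N+2)';  dx = x(2)-x(1);      % grid including endpoints
e  = ones(N,1);
D2 = full(spdiags([e -2*e e], -1:1, N, N))/dx^2; % interior-point 2nd derivative
lam = sort(eig(D2), 'descend');
-lam(1:3)'    % approximately [1 4 9], i.e. k = 1, 2, 3
```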
Using the second derivative differentiation matrix we described above (actually
a more accurate version of it) and extending the notion to two dimensions, we
transform a differential equation into a matrix eigenvalue equation. We can
then use MATLAB to compute the eigenvalues and eigenvectors numerically to
investigate the standing wave patterns, for example:
chladni
[Figure: standing wave patterns computed by the chladni demo]