Notes LinearSystems
Where the coefficients aij and the functions gi are arbitrary functions of t. If every
term gi is the constant zero, then the system is said to be homogeneous.
Otherwise, it is a nonhomogeneous system if even one of the gi is nonzero.
The system (*) is most often given in a shorthand format as a matrix-vector
equation, in the form:
x′ = Ax + g,

where A = (aij) is the n × n coefficient matrix, and

x′ = [ x1′ ; x2′ ; … ; xn′ ],   x = [ x1 ; x2 ; … ; xn ],   g = [ g1 ; g2 ; … ; gn ].
For a homogeneous system, g is the zero vector. Hence it has the form
x′ = Ax.
Fact: Every n-th order linear equation is equivalent to a system of n first
order linear equations.
Examples:
x1′ = x2
x2′ = −(k/m) x1 − (γ/m) x2 + F(t)/m
x1 ′ = x2
x2 ′ = x3
x3 ′ = 4 x1 − 3 x2 + 2 x3
This process can be easily generalized. Given an n-th order linear equation

an y^(n) + an−1 y^(n−1) + … + a1 y′ + a0 y = g(t),

the substitutions x1 = y, x2 = y′, …, xn = y^(n−1) give
x1 ′ = x2
x2 ′ = x3
x3 ′ = x4
…
xn−1′ = xn
xn′ = −(a0/an) x1 − (a1/an) x2 − … − (an−1/an) xn + g(t)/an
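This reduction can be sketched in a few lines of code. The helper name `companion_matrix` and the coefficient ordering (lowest power first) are my own choices, not from the notes; the last row encodes exactly the formula for xn′ above:

```python
import numpy as np

def companion_matrix(a):
    """Coefficient matrix of the first order system equivalent to
    a[n]*y^(n) + ... + a[1]*y' + a[0]*y = 0, using x1 = y, ..., xn = y^(n-1)."""
    a = np.asarray(a, dtype=float)
    n = len(a) - 1
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)     # x_i' = x_{i+1} for i = 1, ..., n-1
    A[-1, :] = -a[:-1] / a[-1]     # x_n' = -(a0/an)x1 - ... - (a_{n-1}/an)xn
    return A

# the earlier example x3' = 4x1 - 3x2 + 2x3 comes from y''' - 2y'' + 3y' - 4y = 0
A = companion_matrix([-4, 3, -2, 1])
print(A)
```

Its last row reproduces the coefficients 4, −3, 2 of the third order example above.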
4. Rewrite the system you found in (a) Exercise 1, and (b) Exercise 2, into a
matrix-vector equation.
5. Convert the third order linear equation below into a system of 3 first
order equations using (a) the usual substitutions, and (b) substitutions in the
reverse order: x1 = y″, x2 = y′, x3 = y. Deduce the fact that there are multiple
ways to rewrite each n-th order linear equation into a linear system of n
equations.
y″′ + 6y″ + y′ − 2y = 0
Answers:
1. x1′ = x2
   x2′ = −5x1 + 4x2
2. x1′ = x2
   x2′ = x3
   x3′ = −9x1 + 5x3 + t cos 2t
3. x1 ′ = x2
x2 ′ = x3
x3 ′ = x4
x4 ′ = 6x1 − 2πx2 + πx3 − 3x4
4. (a) x′ = [ 0  1 ; −5  4 ] x
   (b) x′ = [ 0  1  0 ; 0  0  1 ; −9  0  5 ] x + [ 0 ; 0 ; t cos 2t ]
Review topics:
I2 = [ 1  0 ; 0  1 ],   I3 = [ 1  0  0 ; 0  1  0 ; 0  0  1 ],   etc.
AI = IA = A
In x = x, for any n × 1 vector x.
Properties:
A+0=0+A=A
A0 = 0 = 0A
(i) Addition / Subtraction
[ a  b ; c  d ] ± [ e  f ; g  h ] = [ a±e  b±f ; c±g  d±h ]
(ii) Scalar Multiplication
k [ a  b ; c  d ] = [ ka  kb ; kc  kd ],   for any constant k.
(iii) Matrix Multiplication
[ a  b ; c  d ] [ e  f ; g  h ] = [ ae+bg  af+bh ; ce+dg  cf+dh ]
[ a  b ; c  d ] [ x ; y ] = [ ax+by ; cx+dy ].
det [ a  b ; c  d ] = ad − bc
Note: The determinant is a function whose domain is the set of all
square matrices of a certain size, and whose range is the set of all real
(or complex) numbers.
AB = BA = In,
then the matrix B is called the inverse matrix of A, denoted A−1. The
inverse matrix, if it exists, is unique for each A. A matrix is called
invertible if it has an inverse matrix.
Theorem: For any 2 × 2 matrix A = [ a  b ; c  d ], its inverse, if it exists, is given by
A−1 = ( 1/(ad − bc) ) [ d  −b ; −c  a ].
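The theorem translates directly into code. A minimal sketch (the helper name `inv2` is mine, and the sample matrix anticipates the worked example below), checked against the identity A−1 A = I:

```python
import numpy as np

def inv2(M):
    # 2x2 inverse via the formula A^{-1} = (1/(ad - bc)) [[d, -b], [-c, a]]
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: the matrix is not invertible")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[1.0, -2.0], [5.0, 2.0]])
print(inv2(A))                               # [[ 1/6  1/6] [-5/12  1/12]]
print(np.allclose(inv2(A) @ A, np.eye(2)))   # True
```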
Example: Let A = [ 1  −2 ; 5  2 ] and B = [ 2  −3 ; −1  4 ].
(i) 2A − B = 2 [ 1  −2 ; 5  2 ] − [ 2  −3 ; −1  4 ] = [ 2−2  −4−(−3) ; 10−(−1)  4−4 ] = [ 0  −1 ; 11  0 ]
(ii) AB = [ 1  −2 ; 5  2 ] [ 2  −3 ; −1  4 ] = [ 2+2  −3−8 ; 10−2  −15+8 ] = [ 4  −11 ; 8  −7 ]
On the other hand:
BA = [ 2  −3 ; −1  4 ] [ 1  −2 ; 5  2 ] = [ 2−15  −4−6 ; −1+20  2+8 ] = [ −13  −10 ; 19  10 ]
(iv) A−1 = ( 1/(2 − (−10)) ) [ 2  2 ; −5  1 ] = (1/12) [ 2  2 ; −5  1 ] = [ 1/6  1/6 ; −5/12  1/12 ]
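These computations are easy to double-check numerically; a short sketch with numpy reproducing each result (and confirming that AB ≠ BA):

```python
import numpy as np

A = np.array([[1, -2], [5, 2]])
B = np.array([[2, -3], [-1, 4]])

print(2 * A - B)          # [[ 0 -1] [11  0]]
print(A @ B)              # [[  4 -11] [  8  -7]]
print(B @ A)              # [[-13 -10] [ 19  10]]  -- AB != BA
print(np.linalg.inv(A))   # [[ 1/6  1/6] [-5/12  1/12]]
```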
7. Systems of linear equations (also known as linear systems)
If the vector b on the right-hand side is the zero vector, then the
system is called homogeneous. A homogeneous linear system always
has a solution, namely the all-zero solution (that is, the origin). This
solution is called the trivial solution of the system. Therefore, a
homogeneous linear system Ax = 0 could have either exactly one
solution, or infinitely many solutions. There is no other possibility,
since it always has, at least, the trivial solution. If such a system has n
equations and exactly the same number of unknowns, then the number
of solution(s) the system has can be determined, without having to
solve the system, by the determinant of its coefficient matrix: the system
has only the trivial solution when det(A) ≠ 0, and infinitely many solutions
when det(A) = 0. An important instance of such a system arises when
computing the eigenvalues and eigenvectors of A, which satisfy
(A − r I) x = 0.
A − rI = [ a  b ; c  d ] − r [ 1  0 ; 0  1 ] = [ a−r  b ; c  d−r ].
Its determinant, set to 0, yields the equation
det [ a−r  b ; c  d−r ] = (a − r)(d − r) − bc = r² − (a + d)r + (ad − bc) = 0
A − rI = [ 2  3 ; 4  3 ] − r [ 1  0 ; 0  1 ] = [ 2−r  3 ; 4  3−r ].
det [ 2−r  3 ; 4  3−r ] = (2 − r)(3 − r) − 12 = r² − 5r − 6 = (r + 1)(r − 6) = 0
(A − r I) x = (A + I) x = [ 2+1  3 ; 4  3+1 ] x = [ 3  3 ; 4  4 ] x = [ 0 ; 0 ].
k1 = [ 1 ; −1 ].
(A − r I) x = (A − 6 I) x = [ 2−6  3 ; 4  3−6 ] x = [ −4  3 ; 4  −3 ] x = [ 0 ; 0 ].
k2 = [ 3 ; 4 ].
A short-cut: suppose A is a diagonal or triangular matrix, that is, of the form
[ a  0 ; 0  d ],   or   [ a  b ; 0  d ],   or   [ a  0 ; c  d ].
Then the eigenvalues are just the main diagonal entries, r = a and r = d, in all 3
examples above.
In general, the characteristic equation
det [ a−r  b ; c  d−r ] = r² − (a + d)r + (ad − bc) = 0
can be rewritten as
r² − Trace(A) r + det(A) = 0.
Note: For any square matrix A, Trace(A) = [sum of all entries on the main
diagonal (running from top-left to bottom-right)]. For a 2 × 2 matrix A,
Trace(A) = a + d.
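A small sketch of this trace/determinant shortcut: build the characteristic polynomial from Trace(A) and det(A), and compare its roots with the eigenvalues computed directly (note that `np.roots` takes coefficients in descending powers):

```python
import numpy as np

A = np.array([[2.0, 3.0], [4.0, 3.0]])
T, D = np.trace(A), np.linalg.det(A)
roots = np.sort(np.roots([1.0, -T, D]))    # r^2 - Trace(A) r + det(A) = 0
print(roots)                               # [-1.  6.]
print(np.allclose(roots, np.sort(np.linalg.eigvals(A))))  # True
```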
A short-cut to find eigenvectors (of a 2 × 2 matrix):
We first find the eigenvalue(s) and then write down, for each eigenvalue, the
matrix (A − r I) as usual. Then we take any row of (A − r I) that does not
consist entirely of zero entries, say the row vector (α, β). We put a
minus sign in front of one of the entries, for example, (α, −β). Then an
eigenvector of the matrix A is found by switching the two entries of the
above vector, that is, k = (−β, α).
Example: Previously, we have seen A = [ 2  3 ; 4  3 ].
The characteristic equation is
r 2 − Trace(A) r + det(A) = r 2 − 5r − 6 = (r + 1)(r − 6) =0,
which has roots r = −1 and 6. For r = −1, the matrix (A − r I) is [ 3  3 ; 4  4 ].
Take the first row, (3, 3), which is a non-zero vector; put a minus sign on the
first entry to get (−3, 3); then switch the entries, and we now have k1 = (3, −3). It
is indeed an eigenvector, since it is a nonzero constant multiple of the vector
we found earlier.
On very rare occasions, both rows of the matrix (A − r I) have all zero
entries. If so, the above algorithm will not be able to find an eigenvector.
Instead, under this circumstance any non-zero vector will be an eigenvector.
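The shortcut, including the rare all-zero fallback, fits in a few lines of code (the function name `shortcut_eigenvector` is my own):

```python
import numpy as np

def shortcut_eigenvector(A, r):
    M = A - r * np.eye(2)
    for row in M:
        if not np.allclose(row, 0):
            alpha, beta = row                  # a nonzero row (alpha, beta)
            return np.array([-beta, alpha])    # negate one entry, then switch
    return np.array([1.0, 0.0])  # A - rI = 0: every nonzero vector works

A = np.array([[2.0, 3.0], [4.0, 3.0]])
k = shortcut_eigenvector(A, -1.0)
print(k)                              # [-3.  3.], a multiple of (1, -1)
print(np.allclose(A @ k, -1.0 * k))   # True
```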
Exercises:
Let C = [ −5  −1 ; 7  3 ]   and   D = [ 2  0 ; −2  −1 ].
1. Compute: (i) C + 2D and (ii) 3C – 5D.
Answers:
1. (i) [ −1  −1 ; 3  1 ],   (ii) [ −25  −3 ; 31  14 ]
2. (i) [ −8  1 ; 8  −3 ],   (ii) [ −10  −2 ; 3  −1 ]
3. (i) −8, (ii) −2, (iii) 16, (iv) 16
4. (i) [ −3/8  −1/8 ; 7/8  5/8 ],   (ii) [ 1/2  0 ; −1  −1 ],   (iii) [ −3/16  −1/16 ; −1/2  −1/2 ]
5. (i) r1 = 2, k1 = [ s ; −7s ];   r2 = −4, k2 = [ s ; −s ];   s = any nonzero number
(ii) r1 = 2, k1 = [ s ; −2s/3 ];   r2 = −1, k2 = [ 0 ; s ];   s = any nonzero number
Solution of 2 × 2 systems of first order linear equations
x1 ′ = a x1 + b x2
x2 ′ = c x1 + d x2
x′ = [ a  b ; c  d ] x.
Or, in shorthand x′ = Ax, if A is already known from context.
Substituting the trial solution x = k e^rt (so that x′ = r k e^rt) into x′ = Ax gives
r k e^rt = A k e^rt.
Since e^rt is never zero, we can always divide both sides by e^rt and get
r k = A k.
We see that this new equation is exactly the relation that defines eigenvalues
and eigenvectors of the coefficient matrix A. In other words, in order for a
function x = k e rt to satisfy our system of differential equations, the number r
must be an eigenvalue of A, and the vector k must be an eigenvector of A
corresponding to r. Just like the solution of a second order homogeneous
linear equation, there are three possibilities, depending on the number of
distinct, and the type of, eigenvalues the coefficient matrix A has.
The possibilities are that A has
I. two distinct real eigenvalues,
II. complex conjugate eigenvalues, or
III. a repeated real eigenvalue.
A related note: from linear algebra, we know that eigenvectors that
correspond to different eigenvalues are always linearly independent of
each other. Consequently, if r1 and r2 are two different eigenvalues, then
their respective eigenvectors k1 and k2, and therefore the corresponding
solutions, are always linearly independent.
Case I Distinct real eigenvalues
Suppose the coefficient matrix A has two distinct real eigenvalues r1 and r2,
with respective eigenvectors k1 and k2. Then the 2 × 2 system x′ = Ax
has a general solution
x = C1 k1 e^(r1 t) + C2 k2 e^(r2 t).
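Under the stated assumptions (two distinct real eigenvalues), the general solution can be assembled from numpy's eigen-decomposition and checked against the system using its exact derivative; the constants C1, C2 below are arbitrary sample values:

```python
import numpy as np

A = np.array([[2.0, 3.0], [4.0, 3.0]])
r, K = np.linalg.eig(A)          # r1, r2, and eigenvector columns k1, k2
C = np.array([1.5, -0.5])        # arbitrary constants C1, C2

def x(t):                        # x = C1 k1 e^(r1 t) + C2 k2 e^(r2 t)
    return K @ (C * np.exp(r * t))

def x_prime(t):                  # exact derivative, term by term
    return K @ (C * r * np.exp(r * t))

t = 0.7
print(np.allclose(x_prime(t), A @ x(t)))   # True: x' = Ax holds
```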
Example: x′ = [ 2  3 ; 4  3 ] x.
We have already found that the coefficient matrix has eigenvalues
r = −1 and 6, with respective eigenvectors
k1 = [ 1 ; −1 ],   k2 = [ 3 ; 4 ].
Therefore, a general solution of this system of differential equations is
x = C1 [ 1 ; −1 ] e^−t + C2 [ 3 ; 4 ] e^6t
Example: x′ = [ 3  −2 ; 2  −2 ] x,   x(0) = [ 1 ; −1 ].
The characteristic equation is r² − r − 2 = (r + 1)(r − 2) = 0, so the
eigenvalues are r = −1 and 2. For r = −1, the system is
(A − r I) x = (A + I) x = [ 3+1  −2 ; 2  −2+1 ] x = [ 4  −2 ; 2  −1 ] x = [ 0 ; 0 ].
k1 = [ 1 ; 2 ],
2
For r = 2, the system is
(A − r I) x = (A − 2 I) x = [ 3−2  −2 ; 2  −2−2 ] x = [ 1  −2 ; 2  −4 ] x = [ 0 ; 0 ].
k2 = [ 2 ; 1 ].
1
Therefore, a general solution is
x = C1 [ 1 ; 2 ] e^−t + C2 [ 2 ; 1 ] e^2t.
Applying the initial condition gives x(0) = C1 [ 1 ; 2 ] + C2 [ 2 ; 1 ] = [ 1 ; −1 ]. That is
C1 + 2C2 = 1
2C1 + C2 = −1.
Solving, C1 = −1 and C2 = 1. Therefore,
x = −[ 1 ; 2 ] e^−t + [ 2 ; 1 ] e^2t = [ −e^−t + 2e^2t ; −2e^−t + e^2t ].
Case II Complex conjugate eigenvalues
A little detail: Similar to what we have done before, first there was the
complex-valued general solution in the form
x = C1 k1 e^((λ + μi) t) + C2 k2 e^((λ − μi) t).
We “filter out” the imaginary parts by carefully choosing two sets of
coefficients to obtain two corresponding real-valued solutions that are also
linearly independent:
u = e^λt ( a cos(μt) − b sin(μt) )
v = e^λt ( a sin(μt) + b cos(μt) )
The real-valued general solution above is just x = C1 u + C2 v. In particular,
it might be useful to know how u and v could be derived by expanding the
following complex-valued expression (the front half of the complex-valued
general solution):
k e^((λ + μi) t) = (a + b i) e^λt ( cos(μt) + i sin(μt) )
= e^λt ( a cos(μt) − b sin(μt) ) + i e^λt ( a sin(μt) + b cos(μt) ).
Then, u is just the real part of this complex-valued function, and v is its
imaginary part.
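The recipe (take λ = Re r, μ = Im r, a = Re k, b = Im k) can be automated. A sketch using the matrix [ 2  −5 ; 1  −2 ], which has eigenvalues ±i; note that numpy normalizes eigenvectors, so a and b differ from hand computation by a scalar, while u remains a real-valued solution:

```python
import numpy as np

A = np.array([[2.0, -5.0], [1.0, -2.0]])   # eigenvalues r = ±i: lam = 0, mu = 1
r, K = np.linalg.eig(A)
i = int(np.argmax(r.imag))                 # pick the eigenvalue with mu > 0
lam, mu = r[i].real, r[i].imag
a, b = K[:, i].real, K[:, i].imag          # k = a + b i

def u(t):
    return np.exp(lam * t) * (a * np.cos(mu * t) - b * np.sin(mu * t))

def u_prime(t):                            # exact derivative of u
    return lam * u(t) + np.exp(lam * t) * mu * (-a * np.sin(mu * t) - b * np.cos(mu * t))

t = 0.3
print(np.allclose(u_prime(t), A @ u(t)))   # True: u is a real-valued solution
```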
Example: x′ = [ 2  −5 ; 1  −2 ] x
The characteristic equation is r² + 1 = 0, so the eigenvalues are r = ±i.
Take the first (the one with positive imaginary part) eigenvalue r = i,
and find one of its eigenvectors:
(A − r I) x = [ 2−i  −5 ; 1  −2−i ] x = [ 0 ; 0 ].
k = [ 5 ; 2−i ] = [ 5 ; 2 ] + [ 0 ; −1 ] i = a + b i
Example: x′ = [ −1  −6 ; 3  5 ] x, whose eigenvalues are r = 2 ± 3i.
Take r = 2 + 3i and find one of its eigenvectors:
(A − r I) x = [ −1−(2+3i)  −6 ; 3  5−(2+3i) ] x = [ −3−3i  −6 ; 3  3−3i ] x = [ 0 ; 0 ].
k = [ −1+i ; 1 ] = [ −1 ; 1 ] + [ 1 ; 0 ] i = a + b i
x = C1 e^2t ( [ −1 ; 1 ] cos(3t) − [ 1 ; 0 ] sin(3t) ) + C2 e^2t ( [ −1 ; 1 ] sin(3t) + [ 1 ; 0 ] cos(3t) )
= C1 e^2t [ −cos(3t) − sin(3t) ; cos(3t) ] + C2 e^2t [ cos(3t) − sin(3t) ; sin(3t) ]
Apply the initial values to find C1 and C2:
x(0) = C1 ( [ −1 ; 1 ] cos(0) − [ 1 ; 0 ] sin(0) ) + C2 ( [ −1 ; 1 ] sin(0) + [ 1 ; 0 ] cos(0) )
= C1 [ −1 ; 1 ] + C2 [ 1 ; 0 ] = [ −C1 + C2 ; C1 ] = [ 0 ; 2 ]
Hence C1 = 2 and, from −C1 + C2 = 0, C2 = 2.
Case III Repeated real eigenvalue
Suppose the coefficient matrix A has a repeated real eigenvalue r; there are
2 sub-cases.
(i) If r has two linearly independent eigenvectors k1 and k2, then the 2 × 2
system x′ = Ax has a general solution
x = C1 k1 e^rt + C2 k2 e^rt.
Note: For 2 × 2 matrices, this possibility only occurs when the coefficient
matrix A is a scalar multiple of the identity matrix. That is, A has the form
k [ 1  0 ; 0  1 ] = [ k  0 ; 0  k ],   for any constant k.
Example: x′ = [ 2  0 ; 0  2 ] x. Its general solution is
x = C1 [ 1 ; 0 ] e^2t + C2 [ 0 ; 1 ] e^2t.
(ii) If r, as is usually the case, has only one linearly independent eigenvector k,
then the 2 × 2 system x′ = Ax has a general solution
x = C1 k e^rt + C2 ( k t e^rt + η e^rt ).
where the vector η is any solution of
(A − r I) η = k.
Example: x′ = [ 1  −4 ; 4  −7 ] x,   x(0) = [ −2 ; 1 ].
The characteristic equation is r² + 6r + 9 = (r + 3)² = 0, so there is a
repeated eigenvalue r = −3. For r = −3:
(A − r I) x = (A + 3 I) x = [ 1+3  −4 ; 4  −7+3 ] x = [ 4  −4 ; 4  −4 ] x = [ 0 ; 0 ].
Both equations of the system are 4x1 − 4x2 = 0, so we get the relation
x1 = x2. Hence, there is only one linearly independent eigenvector:
k = [ 1 ; 1 ].
Next, solve for η:
(A − r I) η = [ 4  −4 ; 4  −4 ] η = [ 1 ; 1 ].
It has solutions of the form η = [ 1/4 + η2 ; η2 ].
Choose η2 = 0; we get η = [ 1/4 ; 0 ].
Therefore, a general solution is
x = C1 [ 1 ; 1 ] e^−3t + C2 ( [ 1 ; 1 ] t e^−3t + [ 1/4 ; 0 ] e^−3t ).
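The pair k, η just found can be verified numerically, along with the second solution x2 = k t e^rt + η e^rt; a sketch using the exact derivative of x2:

```python
import numpy as np

A = np.array([[1.0, -4.0], [4.0, -7.0]])
r = -3.0                        # repeated eigenvalue
k = np.array([1.0, 1.0])        # eigenvector
eta = np.array([0.25, 0.0])     # generalized eigenvector: (A - rI) eta = k

print(np.allclose((A - r * np.eye(2)) @ eta, k))    # True

def x2(t):                      # second solution: k t e^{rt} + eta e^{rt}
    return k * t * np.exp(r * t) + eta * np.exp(r * t)

def x2_prime(t):                # exact derivative
    return k * np.exp(r * t) + r * k * t * np.exp(r * t) + r * eta * np.exp(r * t)

t = 0.5
print(np.allclose(x2_prime(t), A @ x2(t)))          # True: x2' = A x2
```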
Given:
x′ = Ax.
When A has two distinct real eigenvalues r1 and r2 –
x = C1 k1 e^(r1 t) + C2 k2 e^(r2 t).
When A has a repeated real eigenvalue r:
(i) When two linearly independent eigenvectors exist –
x = C1 k1 e^rt + C2 k2 e^rt.
(ii) When only one linearly independent eigenvector exists –
x = C1 k e^rt + C2 ( k t e^rt + η e^rt ).
Note: Solve the system (A − r I) η = k to find the vector η.
Exercises:
3. x′ = [ 8  −4 ; 1  4 ] x.    4. x′ = [ −3  2 ; −1  −5 ] x.
6. x′ = [ −4  0 ; 0  −4 ] x,   x(3) = [ 5 ; −2 ].
7. x′ = [ 1  4 ; 2  3 ] x,   x(1) = [ 0 ; 3 ].
8. x′ = [ 6  8 ; 2  6 ] x,   x(0) = [ 8 ; 0 ].
9. x′ = [ 6  3 ; −2  1 ] x,   x(20) = [ −1 ; −1 ].
10. For each of the initial value problems #5 through #9, how does the
solution behave as t → ∞?
11. Find the general solution of the system below, and determine the
possible values of α and β such that the initial value problem has a solution
that tends to the zero vector as t → ∞.
− 5 − 1 α
x′ = x, x(0) = .
7 3 β
Answers:
1. x = C1 [ 7 ; −5 ] e^−3t + C2 [ 1 ; −1 ] e^−5t
2. x = C1 [ cos(3t) + sin(3t) ; cos(3t) ] + C2 [ −cos(3t) + sin(3t) ; sin(3t) ]
3. x = C1 [ 2 ; 1 ] e^6t + C2 ( [ 2 ; 1 ] t e^6t + [ 1 ; 0 ] e^6t )
4. x = C1 e^−4t [ cos(t) − sin(t) ; −cos(t) ] + C2 e^−4t [ cos(t) + sin(t) ; −sin(t) ]
5. x = e^−t [ −4 cos(t) − 2 sin(t) ; 2 cos(t) − 4 sin(t) ]
6. x = [ 5e^(−4t+12) ; −2e^(−4t+12) ]
7. x = [ 2e^(5t−5) − 2e^(−t+1) ; 2e^(5t−5) + e^(−t+1) ]
8. x = [ 4e^2t + 4e^10t ; −2e^2t + 2e^10t ]
9. x = [ 5e^(3t−60) − 6e^(4t−80) ; −5e^(3t−60) + 4e^(4t−80) ]
10. For #5 and #6, lim(t→∞) x(t) = [ 0 ; 0 ]. For #7, #8, and #9, the limits do
not exist, as x(t) moves infinitely far away from the origin.
11. x = C1 [ 1 ; −7 ] e^2t + C2 [ 1 ; −1 ] e^−4t; the particular solution will tend to
zero as t → ∞ provided that C1 = 0, which can be achieved whenever the
initial condition is such that α = −β (i.e., α + β = 0, including the case α = β =
0).