Review of Linear Systems Theory
The following is a (very) brief review of linear systems theory, convolution, and Fourier analysis. I work primarily with discrete signals, but each result developed in this review has a parallel in terms of continuous signals and systems. I assume the reader is familiar with linear algebra (as reviewed in the handout Geometric Review of Linear Algebra), and least squares estimation (as reviewed in the handout Least Squares Optimization).
1 Linear Shift-Invariant (LSI) Systems

A system is linear if (and only if) it obeys the principle of superposition: the response to a weighted sum of any two inputs is the (same) weighted sum of the responses to each individual input.
A system is shift-invariant (also called translation-invariant for spatial signals, or time-invariant for temporal signals) if the response to any input shifted by any amount ∆ is equal to the response to the original input shifted by amount ∆.
These two properties are completely independent: a system can have one of them, both, or neither [think of an example of each of the 4 possibilities].
The rest of this review is focused on systems that are both linear and shift-invariant (known as LSI systems). The diagram below decomposes the behavior of such an LSI system. Consider an arbitrary discrete input signal. We can rewrite it as a weighted sum of impulses (also called "delta functions"). Since the system is linear, the response to this weighted sum is just the weighted sum of responses to each individual impulse. Since the system is shift-invariant, the response to each impulse is just a shifted copy of the response to the first one. The response to the impulse located at the origin (position 0) is known as the system's impulse response.
Putting it all together, the full system response is the weighted sum of shifted copies of the impulse response. Note that the system is fully characterized by the impulse response: this is all we need to know in order to compute the response of the system to any input!
To make this explicit, we write an equation that describes this computation:
y[n] = Σ_m x[m] r[n−m]
This operation, by which input x and impulse response r are combined to generate the output signal y, is called a convolution. It is often written using a more compact notation: y = x ∗ r. Although we think of x and r as playing very different roles, the operation of convolution is actually commutative: substituting k = n − m gives:
y[n] = Σ_k x[n−k] r[k]
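As a quick numerical check of commutativity (a minimal sketch using NumPy; the arrays are arbitrary examples, not taken from the text):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # arbitrary example input
r = np.array([0.5, 0.25, 0.25])      # arbitrary example impulse response

# np.convolve computes y[n] = sum_m x[m] r[n-m] (full output,
# length len(x) + len(r) - 1). Swapping the arguments changes nothing.
assert np.allclose(np.convolve(x, r), np.convolve(r, x))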
• Author: Eero Simoncelli, Center for Neural Science, and Courant Institute of Mathematical Sciences, New York
University.
• Thanks to Jonathan Pillow for generating some of the figures.
• Created: Fall 2001. Last revised: October 24, 2019.
• Send corrections or comments to [email protected]
[Figure: decomposition of an LSI system's response. The input is rewritten as a weighted sum of impulses (weights v1, v2, v3, v4); the system L maps each weighted impulse to a weighted, shifted copy of the impulse response, and the output is the sum of these.]
And finally, convolution is distributive over addition: a ∗ (b + c) = a ∗ b + a ∗ c.
Back to LSI systems. The impulse response r is also known as a "convolution kernel" or "linear filter". Looking back at the definition, each component of the output y is computed as an inner product of a chunk of the input vector x with a reverse-ordered copy of r. As such, the convolution operation may be visualized as a weighted sum over a window that slides along the input signal.

[Figure: convolution as a sliding window. A reverse-ordered copy of the kernel (r3, r2, r1) is positioned over a chunk of the input; the inner product gives one component of the output.]
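To make the sliding-window picture concrete, here is a minimal sketch of direct convolution in NumPy (the signal and kernel are arbitrary examples; the input is treated as zero outside its support):

import numpy as np

def convolve_direct(x, r):
    """Convolution y[n] = sum_m x[m] r[n-m], with x zero-padded
    outside its support (same result as np.convolve(x, r))."""
    N, M = len(x), len(r)
    y = np.zeros(N + M - 1)
    for n in range(len(y)):
        # Inner product of a chunk of x with a reverse-ordered copy of r.
        for m in range(N):
            k = n - m
            if 0 <= k < M:
                y[n] += x[m] * r[k]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # example input
r = np.array([0.25, 0.5, 0.25])          # example kernel
assert np.allclose(convolve_direct(x, r), np.convolve(x, r))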
2 Sinusoids and Convolution
The sine function, sin(θ), gives the y-coordinate of the points on a unit circle, as a function of the angle θ. The cosine function, cos(θ), gives the x-coordinate. Thus, sin²(θ) + cos²(θ) = 1. The angle, θ, is (by convention) assumed to be in units of radians, measured counter-clockwise relative to the horizontal axis. A full sweep around the unit circle corresponds to an angle of 2π radians.
Now we consider sinusoidal signals. A discretized sinusoid can be written as x[n] = A cos(ωn − φ). Here, n is an integer position within the signal, ω is the frequency of oscillations (in radians per sample), and φ is the phase (in radians).
These sinusoidal functions have a unique behavior with respect to LSI systems. Consider input signal x[n] = cos(ωn). The response of an LSI system with impulse response r[n] is:

y[n] = Σ_k r[k] x[n−k]
     = Σ_k r[k] cos(ω(n−k))
     = Σ_k r[k] [cos(ωn) cos(ωk) + sin(ωn) sin(ωk)]
     = [Σ_k r[k] cos(ωk)] cos(ωn) + [Σ_k r[k] sin(ωk)] sin(ωn)
where the third line is achieved using the trigonometric identity cos(a − b) = cos(a) cos(b) + sin(a) sin(b). The two sums (in brackets) are just inner products of the impulse response with the cosine and sine functions at frequency ω, and we denote them as c_r(ω) = Σ_k r[k] cos(ωk) and s_r(ω) = Σ_k r[k] sin(ωk). If we consider these two values as coordinates of a two-dimensional vector, we can rewrite them in polar coordinates by defining vector length (amplitude) A_r(ω) = √(c_r(ω)² + s_r(ω)²) and vector angle φ_r(ω) = tan⁻¹(s_r(ω)/c_r(ω)). Substituting back into our expression for the LSI response gives:

y[n] = A_r(ω) cos(φ_r(ω)) cos(ωn) + A_r(ω) sin(φ_r(ω)) sin(ωn)
     = A_r(ω) cos(ωn − φ_r(ω))
where the last line is achieved by using the same trigonometric identity as before (but in the opposite direction). Thus: the response of an LSI system to a sinusoidal input signal is a sinusoid of the same frequency, but (possibly) different amplitude and phase. The amplitude is multiplied by A_r(ω), and the phase is shifted by φ_r(ω), both of which are derived from the system impulse response r[n]. This is true of all LSI systems, and all sinusoidal signals.
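To illustrate numerically (a sketch with an arbitrary kernel and frequency, assuming NumPy): away from the boundaries, the response to a cosine input matches A_r(ω) cos(ωn − φ_r(ω)) computed directly from the impulse response.

import numpy as np

r = np.array([0.5, 0.3, 0.2])    # example impulse response
omega = 0.4                      # example frequency (radians/sample)
n = np.arange(200)

# Compute c_r, s_r, and the polar form A_r, phi_r from r directly.
k = np.arange(len(r))
c_r = np.sum(r * np.cos(omega * k))
s_r = np.sum(r * np.sin(omega * k))
A_r = np.hypot(c_r, s_r)
phi_r = np.arctan2(s_r, c_r)

# LSI response via convolution; the formula holds once the full
# kernel overlaps the input, so we compare away from the boundary.
x = np.cos(omega * n)
y = np.convolve(x, r)[:len(n)]
predicted = A_r * np.cos(omega * n - phi_r)
assert np.allclose(y[len(r):], predicted[len(r):])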
Sinusoids as eigenfunctions of LSI systems. The relationship between LSI systems and sinusoidal functions may be expressed more compactly (and completely) by bundling together a sine and cosine function into a single complex exponential:
e^{iθ} = cos(θ) + i sin(θ)

where i = √−1 is the imaginary number. This rather mysterious relationship (known as
Euler’s Formula) can be derived by expanding the exponential in a Taylor series, and noting
that the even (real) terms form the series expansion of a cosine and the odd (imaginary) terms
form the expansion of a sine.
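Spelled out (a standard expansion, included here for concreteness):

e^{iθ} = Σ_{n=0}^{∞} (iθ)ⁿ/n!
       = (1 − θ²/2! + θ⁴/4! − ...) + i (θ − θ³/3! + θ⁵/5! − ...)
       = cos(θ) + i sin(θ)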
The use of complex numbers may seem unnecessarily abstract. But it allows changes in the amplitude and phase of a sinusoid, and thus the responses of an LSI system to a sinusoid, to be expressed more compactly. Consider input signal x[n] = e^{iωn}. The response of an LSI system with impulse response r[n] is now:

y[n] = A_r(ω) cos(ωn − φ_r(ω)) + i A_r(ω) sin(ωn − φ_r(ω))
     = A_r(ω) e^{i(ωn − φ_r(ω))}
     = A_r(ω) e^{−iφ_r(ω)} e^{iωn}
     = A_r(ω) e^{−iφ_r(ω)} x[n].
The action of an LSI system on the complex exponential function is to multiply it by a single complex number, A_r(ω) e^{−iφ_r(ω)}. That is, the complex exponentials are eigenfunctions of LSI systems!
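A quick numerical confirmation (a sketch; the kernel and frequency are arbitrary): away from the boundary, the ratio y[n]/x[n] is the same complex number for every n, equal to A_r(ω) e^{−iφ_r(ω)}.

import numpy as np

r = np.array([0.5, 0.3, 0.2])    # example impulse response
omega = 0.7                      # example frequency
n = np.arange(100)

x = np.exp(1j * omega * n)
y = np.convolve(x, r)[:len(n)]   # LSI response (valid past the boundary)

# The eigenvalue: sum_k r[k] e^{-i omega k} = A_r(w) e^{-i phi_r(w)}.
k = np.arange(len(r))
eigenvalue = np.sum(r * np.exp(-1j * omega * k))

ratios = y[len(r):] / x[len(r):]
assert np.allclose(ratios, eigenvalue)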
3 Fourier Transform(s)
A collection of sinusoids may be used as a linear basis for representing signals. The transformation from the standard representation of the signal (e.g., as a function of time) to a set of coefficients representing the amount of each sinusoid needed to create the signal is called the Fourier Transform (F.T.).
There are really four variants of the Fourier transform, depending on whether the signal is continuous or discrete, and on whether its F.T. is continuous or discrete:

• continuous signal, continuous F.T.: the (continuous) Fourier Transform
• continuous periodic signal, discrete F.T.: the Fourier Series
• discrete signal, continuous periodic F.T.: the Discrete-Time Fourier Transform
• discrete periodic signal, discrete F.T.: the Discrete Fourier Transform (DFT)

The rest of this review focuses on the last of these, the DFT. For signals of length N, the DFT basis consists of the sinusoidal vectors c_k[n] = cos(2πkn/N) and s_k[n] = sin(2πkn/N), for frequency indices k = 0, 1, ..., N/2. Some comments on this basis:
• The vectors above have a squared norm of N/2 (N for k = 0 and, when N is even, for k = N/2), so dividing them by √(N/2) (√N for those exceptions) would make them unit vectors; the norms and orthogonality are verified numerically just after this list. The matrix F formed with these normalized vectors as columns would be orthogonal, with an inverse equal to its transpose. But many definitions/implementations of the DFT choose to normalize the vectors differently. For example, in matlab, the fft function does not include any normalization factor, but the inverse (ifft) function then normalizes by 2/N (1/N for k = 0).
• These vectors come in sine/cosine pairs for each frequency except the first and last (k = 0 and k = N/2), for which the sine vector would be identically zero. (If N is odd, there is no vector at frequency k = N/2.)
• Notice that if we included vectors for additional values of k, they would be redundant. In particular, the vectors associated with any particular k are the same as those for k + mN for any integer m. That is, the DFT, indexed by k, is periodic with period N. Moreover, the vectors associated with −k are the same as those associated with k, except that all of the sine functions are negated (sine is an anti-symmetric function).
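Here is the numerical check referenced in the first bullet (a minimal sketch with an arbitrary example length N = 16):

import numpy as np

N = 16                                   # example (even) signal length
n = np.arange(N)
basis = []
for k in range(N // 2 + 1):
    basis.append(np.cos(2 * np.pi * k * n / N))      # c_k
    if 0 < k < N // 2:
        basis.append(np.sin(2 * np.pi * k * n / N))  # s_k (zero for k = 0, N/2)
basis = np.array(basis)                  # N vectors in total

# Squared norms: N/2 in general, N for the k = 0 and k = N/2 cosines.
print(np.round(np.sum(basis**2, axis=1), 6))

# The Gram matrix is diagonal: the vectors are mutually orthogonal.
gram = basis @ basis.T
assert np.allclose(gram, np.diag(np.diag(gram)))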
Since this set of N sinusoidal vectors is mutually orthogonal, it spans the full input space, and we can write any vector v as a weighted sum of them:

v[n] = Σ_{k=0}^{N/2} a_k c_k[n] + Σ_{k=1}^{N/2−1} b_k s_k[n]
Since the basis is orthogonal, the Fourier coefficients {a_k, b_k} may be computed by projecting the input vector v onto each basis function:

a_k = Σ_{n=0}^{N−1} v[n] c_k[n]

b_k = Σ_{n=0}^{N−1} v[n] s_k[n]
As before, we can also express these coefficients in polar coordinates, rewriting the signal as a sum of phase-shifted sinusoids:

v[n] = Σ_{k=0}^{N/2} A_k cos(2πkn/N − φ_k)
with amplitudes A_k = √(a_k² + b_k²), and phases φ_k = tan⁻¹(b_k/a_k). These are referred to as the Fourier amplitudes (or magnitudes) and Fourier phases of the signal v[n]. Again, this is just a polar coordinate representation of the original values {a_k, b_k}.
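The following sketch (an arbitrary random test signal; the 2/N and 1/N normalizations arise because the basis vectors are not unit length, as noted above) computes the coefficients by projection and confirms that the weighted sum of basis vectors reconstructs the signal:

import numpy as np

N = 16
n = np.arange(N)
v = np.random.default_rng(0).standard_normal(N)   # arbitrary test signal

recon = np.zeros(N)
for k in range(N // 2 + 1):
    c_k = np.cos(2 * np.pi * k * n / N)
    s_k = np.sin(2 * np.pi * k * n / N)
    # Projection, normalized by the squared norm of each basis vector.
    norm = N if k in (0, N // 2) else N / 2
    a_k = np.dot(v, c_k) / norm
    b_k = np.dot(v, s_k) / norm if 0 < k < N // 2 else 0.0
    recon += a_k * c_k + b_k * s_k

assert np.allclose(recon, v)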
The standard representation of the Fourier coefficients uses a complex-valued number to represent the amplitude and phase of each frequency component, A_k e^{iφ_k}. The Fourier amplitudes and phases correspond to the amplitude and phase of this complex number.
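For reference, here is how these quantities relate to a standard FFT routine (a sketch assuming NumPy's unnormalized fft convention, under which the k-th output equals a_k − i b_k for the unnormalized projections above):

import numpy as np

N = 16
n = np.arange(N)
v = np.random.default_rng(1).standard_normal(N)

X = np.fft.fft(v)    # X[k] = sum_n v[n] e^{-i 2 pi k n / N}
for k in range(1, N // 2):
    a_k = np.dot(v, np.cos(2 * np.pi * k * n / N))   # unnormalized projection
    b_k = np.dot(v, np.sin(2 * np.pi * k * n / N))
    # The magnitude and (negated) angle of X[k] recover A_k and phi_k.
    assert np.isclose(np.abs(X[k]), np.hypot(a_k, b_k))
    assert np.isclose(-np.angle(X[k]), np.arctan2(b_k, a_k))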
[Figure: an example Fourier transform pair. An impulse in the signal domain transforms to a constant in the frequency domain, and a constant transforms to an impulse at frequency 0.]
Stretch (dilation) property. If we stretch the input signal (i.e., rescale the x-axis by a factor α), the Fourier transform will be compressed by the same factor (i.e., the frequency axis is rescaled by 1/α). Consider a Gaussian signal: the Fourier amplitude is also a Gaussian, with standard deviation inversely proportional to that of the original signal.
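A numerical sketch of the Gaussian example (the σ values are arbitrary; spectral width is measured as an amplitude-weighted standard deviation over frequency):

import numpy as np

def spectrum_width(x):
    """Amplitude-weighted standard deviation of frequency."""
    amp = np.abs(np.fft.fft(x))
    f = np.fft.fftfreq(len(x))
    return np.sqrt(np.sum(f**2 * amp**2) / np.sum(amp**2))

N = 512
n = np.arange(N) - N // 2
narrow = np.exp(-n**2 / (2 * 8.0**2))    # Gaussian, sigma = 8
wide = np.exp(-n**2 / (2 * 16.0**2))     # Gaussian, sigma = 16 (stretched by 2)

# Doubling the signal width halves the spectral width.
print(spectrum_width(narrow) / spectrum_width(wide))   # approximately 2.0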
Shift property. When we shift an input signal, each sinusoid in the Fourier representation must be shifted. Specifically, shifting by m samples means that the phase of each sinusoid changes by 2πkm/N, while the amplitude is unchanged. Note that the phase change is different for each frequency k.
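A sketch verifying this for a circular shift of an arbitrary random signal (np.roll implements the shift; the sign of the phase change follows NumPy's DFT convention):

import numpy as np

N = 32
m = 5                                    # example shift amount
v = np.random.default_rng(2).standard_normal(N)

X = np.fft.fft(v)
Y = np.fft.fft(np.roll(v, m))            # y[n] = v[n - m] (circularly)

k = np.arange(N)
# Amplitudes are unchanged; each phase changes by 2*pi*k*m/N.
assert np.allclose(np.abs(Y), np.abs(X))
assert np.allclose(Y, X * np.exp(-2j * np.pi * k * m / N))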
4 Convolution Theorem
The most important property of the Fourier representation is its relationship to LSI systems and convolution. To see this, we need to combine the eigenfunction property of complex exponentials with the Fourier transform. The diagram below illustrates this. Consider applying an LSI system to an arbitrary signal. Use the Fourier transform to rewrite it as a weighted sum of sinusoids. The weights in the sum may be expressed as complex numbers, A_k e^{iφ_k}, representing the amplitude and phase of each sinusoidal component. Since the system is linear, the response to this weighted sum is just the weighted sum of responses to each of the individual
sinusoids. But the action of an LSI system on a sinusoid with frequency number k will be to multiply the amplitude by a factor A_r(k) and shift the phase by an amount φ_r(k). Finally, the system response is a sum of sinusoids, with the amplitude of each input component multiplied by A_r(k) and its phase shifted by φ_r(k).

[Figure: the convolution theorem, diagrammed. The input is decomposed into sinusoids with weights A_k e^{iφ_k}; the LSI system L multiplies each component by A_L(k) e^{iφ_L(k)}; the output is the sum of the modified sinusoids.]
Earlier, using a similar sort of diagram, we explained that LSI systems can be characterized by their impulse response, r[n]. Now we have seen that the complex numbers A_r(k) e^{iφ_r(k)} provide an alternative characterization. We now want to find the relationship between these two characterizations. First, we write an expression for the convolution (response of the LSI system):

y[n] = Σ_m r[m] x[n−m]
Substituting a single Fourier component x[n] = e^{i2πkn/N} into this sum, the eigenfunction property tells us it is multiplied by the complex number Σ_m r[m] e^{−i2πkm/N}, which is precisely the k-th coefficient of the DFT of r[n]. Since an arbitrary input is a weighted sum of such components, each of its DFT coefficients is multiplied by the corresponding DFT coefficient of r.
This is quite amazing: the DFT of the LSI system response, y[n], is just the product of the DFT of the input and the DFT of the impulse response! That is, the complex numbers A_r(k) e^{iφ_r(k)} correspond to the Fourier transform of the function r[n].
Beyond its conceptual value, the convolution theorem has a practical use: one can perform a convolution by taking the DFTs of the input and the impulse response, multiplying them, and inverting the DFT of the product. Since the DFT can be computed rapidly with the Fast Fourier Transform (FFT) algorithm, this computation may be more computationally efficient than direct convolution. Conversely, multiplication of two signals corresponds to convolution of their Fourier transforms.
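A sketch verifying the theorem numerically, using circular convolution to match the periodic boundary handling implicit in the DFT (the signals are arbitrary random examples):

import numpy as np

N = 32
rng = np.random.default_rng(3)
x = rng.standard_normal(N)
r = rng.standard_normal(N)

# Circular convolution computed directly from the definition.
y = np.array([sum(x[m] * r[(n - m) % N] for m in range(N)) for n in range(N)])

# Convolution theorem: DFT(y) = DFT(x) * DFT(r), elementwise.
assert np.allclose(np.fft.fft(y), np.fft.fft(x) * np.fft.fft(r))

# Equivalently, convolve via the FFT (the basis of fast convolution).
assert np.allclose(y, np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(r))))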
[Figure: a lowpass filter and its Fourier spectrum.]
As another example of conceptual simplification, consider an impulse response formed by the product of a Gaussian function and a sinusoid (known as a Gabor function). How can we visualize the Fourier transform of this product? We need only compute the convolution of the Fourier transforms of the Gaussian and the sinusoid. The Fourier transform of a Gaussian is a Gaussian. The Fourier transform of a sinusoid is an impulse at the corresponding frequency. Thus, the Fourier transform of the Gabor function is a Gaussian centered at the frequency of the sinusoid.

[Figure: a Gabor function as the product of a Gaussian and a sinusoid; its Fourier transform is a Gaussian centered at the sinusoid's frequency.]
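A final sketch confirming this picture (the envelope width and frequency are arbitrary choices): the amplitude spectrum of a Gabor function peaks at the sinusoid's frequency.

import numpy as np

N = 512
n = np.arange(N) - N // 2
sigma, f0 = 16.0, 0.125                  # example envelope width and frequency

gabor = np.exp(-n**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f0 * n)

amp = np.abs(np.fft.fft(gabor))
freqs = np.fft.fftfreq(N)

# The spectrum peaks at (plus or minus) the sinusoid's frequency f0.
peak = np.abs(freqs[np.argmax(amp)])
assert np.isclose(peak, f0, atol=1.0 / N)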