
APPEARED IN BULLETIN OF THE AMERICAN MATHEMATICAL SOCIETY
Volume 28, Number 2, April 1993, Pages 288–305

WAVELET TRANSFORMS VERSUS FOURIER TRANSFORMS


arXiv:math/9304214v1 [math.NA] 1 Apr 1993

Gilbert Strang

Abstract. This note is a very basic introduction to wavelets. It starts with an orthogonal basis of piecewise constant functions, constructed by dilation and translation. The “wavelet transform” maps each f(x) to its coefficients with respect to this basis. The mathematics is simple and the transform is fast (faster than the Fast Fourier Transform, which we briefly explain), but approximation by piecewise constants is poor. To improve this first wavelet, we are led to dilation equations and their unusual solutions. Higher-order wavelets are constructed, and it is surprisingly quick to compute with them — always indirectly and recursively.

We comment informally on the contest between these transforms in signal processing, especially for video and image compression (including high-definition television). So far the Fourier Transform — or its 8 by 8 windowed version, the Discrete Cosine Transform — is often chosen. But wavelets are already competitive, and they are ahead for fingerprints. We present a sample of this developing theory.

1. The Haar wavelet


To explain wavelets we start with an example. It has every property we hope
for, except one. If that one defect is accepted, the construction is simple and
the computations are fast. By trying to remove the defect, we are led to dilation
equations and recursively defined functions and a small world of fascinating new
problems — many still unsolved. A sensible person would stop after the first
wavelet, but fortunately mathematics goes on.
The basic example is easier to draw than to describe:

Figure 1. Scaling function φ(x), wavelet W (x), and the next level of
detail.

1991 Mathematics Subject Classification. Primary 42A06, 41A05, 65D05.
Key words and phrases. Wavelets, Fourier transform, dilation, orthogonal basis.
I am grateful to the National Science Foundation (DMS 90-06220) for their support.
Received by the editors March 20, 1992 and, in revised form, November 30, 1992.
© 1993 American Mathematical Society
0273-0979/93 $1.00 + $.25 per page


Already you see the two essential operations: translation and dilation. The step from W(2x) to W(2x − 1) is translation. The step from W(x) to W(2x) is dilation. Starting from a single function, the graphs are shifted and compressed. The next level contains W(4x), W(4x − 1), W(4x − 2), W(4x − 3). Each is supported on an interval of length 1/4. In the end we have Haar’s infinite family of functions:

$$W_{jk}(x) = W(2^j x - k) \qquad (\text{together with } \varphi(x)).$$

When the range of indices is j ≥ 0 and 0 ≤ k < 2^j, these functions form a remarkable basis for L²[0, 1]. We extend it below to a basis for L²(R).
The four functions in Figure 1 are piecewise constant. Every function that is constant on each quarter-interval is a combination of these four. Moreover, the inner product ∫ φ(x) W(x) dx is zero — and so are the other inner products. This property extends to all j and k: The translations and dilations of W are mutually orthogonal. We accept this as the definition of a wavelet, although variations are definitely useful in practice. The goal looks easy enough, but the example is deceptively simple.
This orthogonal Haar basis is not a recent invention [1]. It is reminiscent of the
Walsh basis in [2] — but the difference is important.† For Walsh and Hadamard,
the last two basis functions are changed to W (2x) ± W (2x − 1). All of their “binary
sinusoids” are supported on the whole interval 0 ≤ x ≤ 1. This global support is
the one drawback to sines and cosines; otherwise, Fourier is virtually unbeatable.
To represent a local function, vanishing outside a short interval of space or time, a
global basis requires extreme cancellation. Reasonable accuracy needs many terms
of the Fourier series. Wavelets give a local basis.
You see the consequences. If the signal f(x) disappears after x = 1/4, only a
quarter of the later basis functions are involved. The wavelet expansion directly
reflects the properties of f in physical space, while the Fourier expansion is perfect
in frequency space. Earlier attempts at a “windowed Fourier transform” were ad
hoc — wavelets are a systematic construction of a local basis.
The great value of orthogonality is to make expansion coefficients easy to compute. Suppose the values of f(x), constant on four quarter-intervals, are 9, 1, 2, 0. Its Haar wavelet expansion expresses this vector y as a combination of the basis functions:

$$\begin{pmatrix} 9\\ 1\\ 2\\ 0 \end{pmatrix} = 3\begin{pmatrix} 1\\ 1\\ 1\\ 1 \end{pmatrix} + 2\begin{pmatrix} 1\\ 1\\ -1\\ -1 \end{pmatrix} + 4\begin{pmatrix} 1\\ -1\\ 0\\ 0 \end{pmatrix} + 1\begin{pmatrix} 0\\ 0\\ 1\\ -1 \end{pmatrix}.$$
The wavelet coefficients b_{jk} are 3, 2, 4, 1; they form the wavelet transform of f. The connection between the vectors y and b is the matrix W4, in whose orthogonal columns you recognize the graphs of Figure 1:

$$y = W_4 b \quad\text{is}\quad \begin{pmatrix} 9\\ 1\\ 2\\ 0 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & 0\\ 1 & 1 & -1 & 0\\ 1 & -1 & 0 & 1\\ 1 & -1 & 0 & -1 \end{pmatrix} \begin{pmatrix} 3\\ 2\\ 4\\ 1 \end{pmatrix}.$$
† Rademacher was first to propose an orthogonal family of ±1 functions; it was not complete.

After Walsh constructed a complete set, Rademacher’s Part II was regrettably unpublished and
seems to be lost (but Schur saw it).
This is exactly comparable to the Discrete Fourier Transform, in which f(x) = Σ a_k e^{ikx} stops after four terms. Now the vector y contains the values of f at four points:

$$y = F_4 a \quad\text{is}\quad \begin{pmatrix} f(0\pi/2)\\ f(1\pi/2)\\ f(2\pi/2)\\ f(3\pi/2) \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & i & i^2 & i^3\\ 1 & i^2 & i^4 & i^6\\ 1 & i^3 & i^6 & i^9 \end{pmatrix} \begin{pmatrix} a_0\\ a_1\\ a_2\\ a_3 \end{pmatrix}.$$
This Fourier matrix also has orthogonal columns. The n by n matrix Fn follows the same pattern, with ω = e^{2πi/n} in place of i = e^{2πi/4}. Multiplied by 1/√n to give orthonormal columns, it is the most important of all unitary matrices. The wavelet matrix sometimes offers modest competition.
To invert a real orthogonal matrix we transpose it. To invert a unitary matrix, transpose its complex conjugate. After accounting for the factors that enter when columns are not unit vectors, the inverse matrices are

$$W_4^{-1} = \frac14\begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & 1 & -1 & -1\\ 2 & -2 & 0 & 0\\ 0 & 0 & 2 & -2 \end{pmatrix} \quad\text{and}\quad F_4^{-1} = \frac14\begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & (-i) & (-i)^2 & (-i)^3\\ 1 & (-i)^2 & (-i)^4 & (-i)^6\\ 1 & (-i)^3 & (-i)^6 & (-i)^9 \end{pmatrix}.$$

The essential point is that the inverse matrices have the same form as the originals.
If we can transform quickly, we can invert quickly — between coefficients and
function values. The Fourier coefficients come from values at n points. The Haar
coefficients come from values on n subintervals.
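Here is a short numerical check of these 4 by 4 transforms (my sketch, not part of the original paper; numpy assumed):

```python
import numpy as np

# Columns of W4 are the graphs of Figure 1; entries of F4 are powers of i.
W4 = np.array([[1,  1,  1,  0],
               [1,  1, -1,  0],
               [1, -1,  0,  1],
               [1, -1,  0, -1]], dtype=float)
F4 = np.array([[1j ** (j * k) for k in range(4)] for j in range(4)])

y = np.array([9., 1., 2., 0.])
b = np.linalg.solve(W4, y)
print(b)                                    # [3. 2. 4. 1.] -- the wavelet transform

# Orthogonal columns: W4.T @ W4 is diagonal (columns are not unit vectors),
# so the inverse is the transpose divided by the squared column lengths.
D = np.diag(W4.T @ W4)
print(np.allclose(np.diag(1 / D) @ W4.T, np.linalg.inv(W4)))   # True

# F4 / 2 is unitary, so F4^{-1} is its conjugate transpose divided by 4.
print(np.allclose(np.linalg.inv(F4), F4.conj().T / 4))         # True
```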

2. Fast Fourier Transform and Fast Wavelet Transform


The Fourier matrix is full — it has no zero entries. Multiplication of Fn times a vector a, done directly, requires n² separate multiplications. We are evaluating an n-term Fourier series at n points. The series is Σ_{k=0}^{n−1} a_k e^{ikx}, and the points are x = 2πj/n.
The wavelet matrix is sparse — many of its entries are zero. Taken together, the third and fourth columns of W fill a single column; the fifth, sixth, seventh, and eighth columns would fill one more column. With n = 2^ℓ, we fill only ℓ + 1 columns. The total number of nonzero entries in Wn is n(ℓ + 1). This already shows the effect of a more local basis. Multiplication of Wn times a vector b, done directly, requires only n(log₂ n + 1) separate multiplications.
Both of these matrix multiplications can be made faster. For Fn a, this is achieved by the Fast Fourier Transform — the most valuable numerical algorithm in our lifetime. It changes n² to (1/2)n log₂ n by a reorganization of the steps — which is simply a factorization of the Fourier matrix. A typical calculation with n = 2^10 changes (1024)(1024) multiplications to (5)(1024). This saving by a factor greater than 200 is genuine. The result is that the FFT has revolutionized signal processing. Whole industries are changed from slow to fast by this one idea — which is pure mathematics.
The wavelet matrix Wn also allows a neat factorization into very sparse matrices. The operation count drops from O(n log n) all the way to O(n). For our piecewise constant wavelet the only operations are add and subtract; in fact, W2 is the same as F2. Both fast transforms have ℓ = log₂ n steps, in the passage from n down to 1.
For the FFT, each step requires (1/2)n multiplications (as shown below). For the Fast Wavelet Transform, the cost of each successive step is cut in half. It is a beautiful “pyramid scheme” created by Burt and Adelson and Mallat and others. The total cost has a factor 1 + 1/2 + 1/4 + · · · that stays below 2. This is why the final outcome for the FWT is O(n) without the logarithm ℓ.
The matrix factorizations are so simple, especially for n = 4, that it seems
worthwhile to display them. The FFT has two copies of the half-size transform F2
in the middle:

$$\text{(1)}\qquad F_4 = \begin{pmatrix} 1 & & 1 & \\ & 1 & & i \\ 1 & & -1 & \\ & 1 & & -i \end{pmatrix} \begin{pmatrix} 1 & 1 & & \\ 1 & i^2 & & \\ & & 1 & 1 \\ & & 1 & i^2 \end{pmatrix} \begin{pmatrix} 1 & & & \\ & & 1 & \\ & 1 & & \\ & & & 1 \end{pmatrix}.$$

The permutation on the right puts the even a’s (a0 and a2 ) ahead of the odd a’s
(a1 and a3 ). Then come separate half-size transforms on the evens and odds. The
matrix at the left combines these two half-size outputs in a way that produces the
correct full-size answer. By multiplying those three matrices we recover F4 .
The factorization of W4 is a little different:

$$\text{(2)}\qquad W_4 = \begin{pmatrix} 1 & 1 & & \\ 1 & -1 & & \\ & & 1 & 1 \\ & & 1 & -1 \end{pmatrix} \begin{pmatrix} 1 & & & \\ & & 1 & \\ & 1 & & \\ & & & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 & & \\ 1 & -1 & & \\ & & 1 & \\ & & & 1 \end{pmatrix}.$$

At the next level of detail (for W8 ), the same 2 by 2 matrix appears four times in
the left factor. The permutation matrix puts columns 0, 2, 4, 6 of that factor ahead
of 1, 3, 5, 7. The third factor has W4 in one corner and I4 in the other corner (just
as W4 above ends with W2 and I2 — this factorization is the matrix form of the
pyramid algorithm). It is the identity matrices I4 and I2 that save multiplications.
Altogether W2 appears 4 times at the left of W8 , then 2 times at the left of W4 , and
then once at the right. The multiplication count from these n − 1 small matrices is
O(n) — the Holy Grail of complexity theory.
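For Haar the pyramid fits in a few lines. This sketch (mine, using the unnormalized averages and differences of §1) makes the O(n) count concrete: each pass halves the length, so the total work is n + n/2 + · · · < 2n.

```python
import numpy as np

def haar_fwt(y):
    """Haar pyramid: return [overall average, coarse details, ..., fine details]."""
    y = np.asarray(y, dtype=float)
    details = []
    while len(y) > 1:
        details.append((y[0::2] - y[1::2]) / 2)   # high-pass: pairwise differences
        y = (y[0::2] + y[1::2]) / 2               # low-pass: recurse on the averages
    return [y] + details[::-1]

print(haar_fwt([9, 1, 2, 0]))   # [array([3.]), array([2.]), array([4., 1.])]
```

The output reproduces the wavelet coefficients 3, 2, 4, 1 of §1; the small 2 by 2 matrix is applied n − 1 times in all.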
Walsh would have another copy of the 2 by 2 matrix in the last corner, instead
of I2 . Now the product has orthogonal columns with all entries ±1 — the Walsh
basis. Allowing W2 or I2 , W4 or I4 , W8 or I8 , . . . in the third factors, the matrix
products exhibit a whole family of orthogonal bases. This is a wavelet packet, with
great flexibility. Then a “best basis” algorithm aims for a choice that concentrates
most of f into a few basis vectors. That is the goal — to compress information.
The same principle of factorization applies for any power of 2, say n = 1024. For Fourier, the entries of F are powers of ω = e^{2πi/1024}. The row and column indices go from 0 to 1023 instead of 1 to 1024. The zeroth row and column are filled with ω⁰ = 1. The entry in row j, column k of F is ω^{jk}. This is the term e^{ikx} evaluated at x = 2πj/1024. The multiplication F1024 a computes the series Σ a_k ω^{jk} for j = 0 to 1023.
The key to the matrix factorization is just this. Squaring the 1024th root of unity gives the 512th root: (ω²)^{512} = 1. This was the reason behind the middle factor in (1), where i is the fourth root and i² is the square root. It is the essential link between F1024 and F512. The first stage of the FFT is the great factorization
rediscovered by Cooley and Tukey (and described in 1805 by Gauss):


   
$$\text{(3)}\qquad F_{1024} = \begin{pmatrix} I_{512} & D_{512} \\ I_{512} & -D_{512} \end{pmatrix} \begin{pmatrix} F_{512} & \\ & F_{512} \end{pmatrix} \begin{pmatrix} \text{even-odd} \\ \text{shuffle} \end{pmatrix}.$$

I512 is the identity matrix. D512 is the diagonal matrix with entries (1, ω, . . . , ω^{511}), requiring about 512 multiplications. The two copies of F512 in the middle give a matrix only half full compared to F1024 — here is the crucial saving. The shuffle separates the incoming vector a into (a0, a2, . . . , a1022) with even indices and the odd part (a1, a3, . . . , a1023).
Equation (3) is an imitation of equation (1), eight levels higher. Both are easily
verified. Computational linear algebra has become a world of matrix factorizations,
and this one is outstanding.
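Equation (3) is essentially one line of recursive code. A sketch (mine), keeping the paper’s convention ω = e^{2πi/n}; note that library FFTs usually take the opposite sign in the exponent:

```python
import numpy as np

def fft(a):
    """Evaluate y_j = sum_k a_k w^{jk}, w = exp(2 pi i / n), n a power of 2."""
    n = len(a)
    if n == 1:
        return np.asarray(a, dtype=complex)
    E = fft(a[0::2])                                   # F_{n/2} on the even a's
    O = fft(a[1::2])                                   # F_{n/2} on the odd a's
    d = np.exp(2j * np.pi * np.arange(n // 2) / n) * O # the diagonal factor D
    return np.concatenate([E + d, E - d])              # the [I D; I -D] factor

n = 1024
a = np.random.rand(n)
F = np.exp(2j * np.pi / n) ** np.outer(np.arange(n), np.arange(n))
print(np.allclose(fft(a), F @ a))                      # True
```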
You have anticipated what comes next. Each F512 is reduced in the same way
to two half-size transforms F = F256 . The work is cut in half again, except for an
additional 512 multiplications from the diagonal matrices D = D256 :

$$\text{(4)}\qquad \begin{pmatrix} F_{512} & \\ & F_{512} \end{pmatrix} = \begin{pmatrix} I & D & & \\ I & -D & & \\ & & I & D \\ & & I & -D \end{pmatrix} \begin{pmatrix} F & & & \\ & F & & \\ & & F & \\ & & & F \end{pmatrix} \begin{pmatrix} \text{even-odd gives 0 and 2 mod 4} \\ \text{even-odd gives 1 and 3 mod 4} \end{pmatrix}.$$

For n = 1024 there are ℓ = 10 levels, and each level has (1/2)n = 512 multiplications from the first factor — to reassemble the half-size outputs from the level below. Those D’s yield the final count (1/2)nℓ.
In practice, ℓ = log₂ n is controlled by splitting the signal into smaller blocks. With n = 8, the scale length of the transform is closer to the scale length of most images. This is the short-time Fourier transform, which is the transform of a “windowed” function wf. The multiplier w is the characteristic function of the window. (Smoothing is necessary! Otherwise this blocking of the image can be visually unacceptable. The ridges of fingerprints are broken up very badly, and windowing was unsuccessful in tests by the FBI.) In other applications the implementation may favor the FFT — theoretical complexity is rarely the whole story.
A more gradual exposition of the Fourier matrix and the FFT is in the monographs [3, 4] and the textbooks [5, 6] — and in many other sources [see 7]. (The lower-level text [8] is intended more for reference than for teaching. On the other hand, this is just a matrix–vector multiplication!) FFT codes are freely available on netlib, and generally each machine has its own special software.
For higher-order wavelets, the FWT still involves many copies of a single small
matrix. The entries of this matrix are coefficients ck from the “dilation equation”.
We move from fast algorithms to a quite different part of mathematics — with the
goal of constructing new orthogonal bases. The basis functions are unusual, for a
good reason.

3. Wavelets by multiresolution analysis


The defect in piecewise constant wavelets is that they are very poor at approximation. Representing a smooth function requires many pieces. For wavelets this means many levels — the number 2^j must be large for an acceptable accuracy. It is similar to the rectangle rule for integration, or Euler’s method for a differential equation, or forward differences ∆y/∆x as estimates of dy/dx. Each is a simple and natural first approach, but inadequate in the end. Through all of scientific computing runs this common theme: Increase the accuracy at least to second order. What this means is: Get the linear term right.
For integration, we move to the trapezoidal rule and midpoint rule. For deriva-
tives, second-order accuracy comes with centered differences. The whole point of
Newton’s method for solving f (x) = 0 is to follow the tangent line. All these are
exact when f is linear. For wavelets to be accurate, W (x) and φ(x) need the same
improvement. Every ax + b must be a linear combination of translates.
Piecewise polynomials (splines and finite elements) are often based on the “hat”
function — the integral of Haar’s W (x). But this piecewise linear function does not
produce orthogonal wavelets with a local basis. The requirement of orthogonality to
dilations conflicts strongly with the demand for compact support — so much so that
it was originally doubted whether one function could satisfy both requirements and
still produce ax + b. It was the achievement of Ingrid Daubechies [9] to construct
such a function.
We now outline the construction of wavelets. The reader will understand that
we only touch on parts of the theory and on selected applications. An excellent
account of the history is in [10]. Meyer and Lemarié describe the earliest wavelets
(including Gabor’s). Then comes the beautiful pattern of multiresolution analysis
uncovered by Mallat — which is hidden by the simplicity of the Haar basis. Mallat’s
analysis found expression in the Daubechies wavelets.
Begin on the interval [0, 1]. The space V0 spanned by φ(x) is orthogonal to the space W0 spanned by W(x). Their sum V1 = V0 ⊕ W0 consists of all piecewise constant functions on half-intervals. A different basis for V1 is φ(2x) = (1/2)(φ(x) + W(x)) and φ(2x − 1) = (1/2)(φ(x) − W(x)). Notice especially that V0 ⊂ V1. The function φ(x) is a combination of φ(2x) and φ(2x − 1). This is the dilation equation, for Haar’s example.
Now extend that pattern to the spaces Vj and Wj of dimension 2^j:

$$V_j = \text{span of the translates } \varphi(2^j x - k) \text{ for fixed } j,$$
$$W_j = \text{span of the wavelets } W(2^j x - k) \text{ for fixed } j.$$

The next space V2 is spanned by φ(4x), φ(4x − 1), φ(4x − 2), φ(4x − 3). It contains
all piecewise constant functions on quarter-intervals. That space was also spanned
by the four functions φ(x), W (x), W (2x), W (2x − 1) at the start of this paper.
Therefore, V2 decomposes into V1 and W1 just as V1 decomposes into V0 and W0 :

(5) V2 = V1 ⊕ W1 = V0 ⊕ W0 ⊕ W1 .

At every level, the wavelet space Wj is the “difference” between Vj+1 and Vj :

(6) Vj+1 = Vj ⊕ Wj = V0 ⊕ W0 ⊕ · · · ⊕ Wj .

The translates of wavelets on the right are also translates of scaling functions on
the left. For the construction of wavelets, this offers a totally different approach.
Instead of creating W (x) and the spaces Wj , we can create φ(x) and the spaces
Vj . It is a choice between the terms Wj of an infinite series or their partial sums
Vj . Historically the constructions began with W (x). Today the constructions begin
with φ(x). It has proved easier to work with sums than differences.
A first step is to change from [0, 1] to the whole line R. The translation index k is unrestricted. The subspaces Vj and Wj are infinite-dimensional (L² closures of translates). One basis for L²(R) consists of φ(x − k) and W_{jk}(x) = W(2^j x − k) with j ≥ 0, k ∈ Z. Another basis contains all W_{jk} with j, k ∈ Z. Then the dilation index j is also unrestricted — for j = −1 the functions φ(2^{−1}x − k) are constant on intervals of length 2. The decomposition into Vj ⊕ Wj continues to hold! The
sequence of closed subspaces Vj has the following basic properties for −∞ < j < ∞:

$$V_j \subset V_{j+1}, \qquad \bigcap V_j = \{0\}, \qquad \bigcup V_j \ \text{is dense in } L^2(\mathbf{R});$$
$$f(x) \text{ is in } V_j \text{ if and only if } f(2x) \text{ is in } V_{j+1};$$
$$V_0 \text{ has an orthogonal basis of translates } \varphi(x - k),\ k \in \mathbf{Z}.$$

These properties yield a “multiresolution analysis” — the pattern that other wavelets will follow. Vj will be spanned by φ(2^j x − k). Wj will be its orthogonal complement in Vj+1. Mallat proved, under mild hypotheses, that Wj is also spanned by translates [11]; these are the wavelets.
Dilation is built into multiresolution analysis by the property that f (x) ∈ Vj ⇔
f (2x) ∈ Vj+1 . This applies in particular to φ(x). It must be a combination of
translates of φ(2x). That is the hidden pattern, which has become central to this
subject. We have reached the dilation equation.

4. The dilation equation


In the words of [10], “la perspective est complètement changée” (the perspective is completely changed). The construction of wavelets now begins with the scaling function φ. The dilation equation (or refinement equation or two-scale difference equation) connects φ(x) to translates of φ(2x):

$$\text{(7)}\qquad \varphi(x) = \sum_{k=0}^{N} c_k\, \varphi(2x - k).$$

The coefficients for Haar are c0 = c1 = 1. The box function φ is the sum of two half-width boxes. That is equation (7). Then W is a combination of the same translates (because W0 ⊂ V1). The coefficients for W = φ(2x) − φ(2x − 1) are 1 and −1. It is absolutely remarkable that W uses the same coefficients as φ, but in reverse order and with alternating signs:

$$\text{(8)}\qquad W(x) = \sum_{k=1-N}^{1} (-1)^k c_{1-k}\, \varphi(2x - k).$$

This construction makes W orthogonal to φ and its translates. (For those trans-
lates to be orthogonal to each other, see below.) The key is that every vector
c0 , c1 , c2 , c3 is automatically orthogonal to c3 , −c2 , c1 , −c0 and all even translates
like 0, 0, c3 , −c2 .
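That claim is an identity in the c’s, easy to confirm numerically (a throwaway sketch, not from the paper):

```python
import numpy as np

c = np.random.default_rng(0).standard_normal(4)   # any c0, c1, c2, c3
w = np.array([c[3], -c[2], c[1], -c[0]])          # reversed, alternating signs
print(np.dot(c, w))          # ~0: c0*c3 - c1*c2 + c2*c1 - c3*c0 cancels exactly
print(np.dot(c[:2], w[2:]))  # ~0: overlap with an even translate of w
```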
When N is odd, c_{1−k} can be replaced in (8) by c_{N−k}. This shift by N − 1 is even. Then the sum goes from 0 to N and W(x) looks especially attractive.
Everything hinges on the c’s. They dominate all that follows. They determine
(and are determined by) φ, they determine W , and they go into the matrix fac-
torization (2). In the applications, convolution with φ is an averaging operator —
it produces smooth functions (and a blurred picture). Convolution with W is a
differencing operator, which picks out details.
The convolution of the box with itself is the piecewise linear hat function — equal to 1 at x = 1 and supported on the interval [0, 2]. It satisfies the dilation equation with c0 = 1/2, c1 = 1, c2 = 1/2. But there is a condition on the c’s in order that the wavelet basis W(2^j x − k) shall be orthogonal. The three coefficients 1/2, 1, 1/2 do not satisfy that condition. Daubechies found the unique c0, c1, c2, c3 (four coefficients are necessary) to give orthogonality plus second-order approximation. Then the question becomes: How to solve the dilation equation?
Note added in proof. A new construction has just appeared that uses two scal-
ing functions φi and wavelets Wi . Their translates are still orthogonal [38]. The
combination φ1 (x) + φ1 (x − 1) + φ2 (x) is the hat function, so second-order accuracy
is achieved. The remarkable property is that these are “short functions”: φ1 is
supported on [0, 1] and φ2 on [0, 2]. They satisfy a matrix dilation equation.
These short wavelets open new possibilities for application, since the greatest
difficulties are always at boundaries. The success of the finite element method
is largely based on the very local character of its basis functions. Splines have
longer support (and more smoothness), wavelets have even longer support (and
orthogonality). The translates of a long basis function overrun the boundary.
There are two principal methods to solve dilation equations. One is by Fourier transform, the other is by matrix products. Both give φ as a limit, not as an explicit function. We never discover the exact value φ(√2). It is amazing to compute with a function we do not know — but the applications only require the c’s. When complicated functions come from a simple rule, we know from increasing experience what to do: Stay with the simple rule.
Solution of the dilation equation by Fourier transform. Without the “2” we would have an ordinary difference equation — entirely familiar. The presence of two scales, x and 2x, is the problem. A warning comes from Weierstrass and de Rham and Takagi — their nowhere differentiable functions are all built on multiple scales like Σ aⁿ cos(bⁿx). The Fourier transform easily handles translation by k in equation (7), but 2x in physical space becomes ξ/2 in frequency space:

$$\text{(9)}\qquad \hat\varphi(\xi) = \frac{1}{2}\sum_k c_k e^{ik\xi/2}\, \hat\varphi\!\left(\frac{\xi}{2}\right) = P\!\left(\frac{\xi}{2}\right)\hat\varphi\!\left(\frac{\xi}{2}\right).$$
The “symbol” is P(ξ) = (1/2) Σ c_k e^{ikξ}. With ξ = 0 in (9) we find P(0) = 1, or Σ c_k = 2 — the first requirement on the c’s. This allows us to look for a solution normalized by φ̂(0) = ∫ φ(x) dx = 1. It does not ensure that we find a φ that is continuous or even in L¹. What we do find is an infinite product, by recursion from ξ/2 to ξ/4 and onward:

$$\hat\varphi(\xi) = P\!\left(\frac{\xi}{2}\right)\hat\varphi\!\left(\frac{\xi}{2}\right) = P\!\left(\frac{\xi}{2}\right)P\!\left(\frac{\xi}{4}\right)\hat\varphi\!\left(\frac{\xi}{4}\right) = \cdots = \prod_{j=1}^{\infty} P\!\left(\frac{\xi}{2^j}\right).$$

This solution φ may be only a distribution. Its smoothness becomes clearer by


matrix methods.
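The truncated product is easy to evaluate. A sketch (mine), assuming the transform convention φ̂(ξ) = ∫ φ(x)e^{iξx} dx used in (9): for the Haar coefficients c0 = c1 = 1 it reproduces the transform of the box function on [0, 1].

```python
import numpy as np

def P(c, xi):
    """Symbol P(xi) = (1/2) sum_k c_k exp(i k xi)."""
    return 0.5 * sum(ck * np.exp(1j * k * xi) for k, ck in enumerate(c))

def phihat(c, xi, levels=40):
    """Truncated infinite product  prod_{j=1..levels} P(xi / 2^j)."""
    out = 1.0 + 0j
    for j in range(1, levels + 1):
        out *= P(c, xi / 2 ** j)
    return out

xi = 3.7
box = (np.exp(1j * xi) - 1) / (1j * xi)           # transform of the box on [0, 1]
print(np.allclose(phihat([1.0, 1.0], xi), box))   # True
```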
Solution by matrix products [12, 13]. When φ is known at the integers, the dilation equation gives φ at half-integers such as x = 3/2. Since 2x − k is an integer, we just evaluate Σ c_k φ(2x − k). Then the equation gives φ at quarter-integers as combinations of φ at half-integers. The combinations are built into the entries of two matrices A and B, and the recursion is taking their products.
To start we need φ at the integers. With N = 3, for example, set x = 1 and x = 2 in the dilation equation:

$$\text{(10)}\qquad \begin{aligned} \varphi(1) &= c_1\varphi(1) + c_0\varphi(2),\\ \varphi(2) &= c_3\varphi(1) + c_2\varphi(2). \end{aligned}$$

Impose the conditions c1 + c3 = 1 and c0 + c2 = 1. Then the 2 by 2 matrix in (10), formed from these c’s, has λ = 1 as an eigenvalue. The eigenvector is (φ(1), φ(2)). It follows from (7) that φ will vanish outside 0 ≤ x ≤ N.
To see the step from integers to half-integers in matrix form, convert the scalar dilation equation to a first-order equation for the vector v(x):

$$v(x) = \begin{pmatrix} \varphi(x) \\ \varphi(x+1) \\ \varphi(x+2) \end{pmatrix}, \qquad A = \begin{pmatrix} c_0 & 0 & 0 \\ c_2 & c_1 & c_0 \\ 0 & c_3 & c_2 \end{pmatrix}, \qquad B = \begin{pmatrix} c_1 & c_0 & 0 \\ c_3 & c_2 & c_1 \\ 0 & 0 & c_3 \end{pmatrix}.$$

The equation turns out to be v(x) = Av(2x) for 0 ≤ x ≤ 1/2 and v(x) = Bv(2x − 1) for 1/2 ≤ x ≤ 1. By recursion this yields v at any dyadic point — whose binary expansion is finite. Each 0 or 1 in the expansion decides between A and B. For example

$$\text{(11)}\qquad v(.01001) = (ABAAB)\,v(0).$$

Important: The matrix B has entries c_{2i−j}. So does A, when the indexing starts with i = j = 0. The dilation equation itself is φ = Cφ, with an operator C of this new kind. Without the 2 it would be a Toeplitz operator, constant along each diagonal, but now every other row is removed. Engineers call it “convolution followed by decimation”. (The word downsampling is also used — possibly a euphemism for decimation.) Note that the derivative of the dilation equation is φ′ = 2Cφ′. Successive derivatives introduce powers of 2. The eigenvalues of these operators C are 1, 1/2, 1/4, . . . , until φ⁽ⁿ⁾ is not defined in the space at hand. The sum condition Σ c_even = Σ c_odd = 1 is always imposed — it assures in Condition A1 below that we have first-order approximation at least.
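The recursion (10)–(11) is short in code. A sketch (mine) for the Daubechies coefficients quoted in §5: the eigenvector of (10) supplies the values at the integers, and the binary digits of a dyadic x choose between A and B.

```python
import numpy as np

s = np.sqrt(3)
c = np.array([1 + s, 3 + s, 3 - s, 1 - s]) / 4          # c0, c1, c2, c3

A = np.array([[c[0], 0,    0   ],
              [c[2], c[1], c[0]],
              [0,    c[3], c[2]]])
B = np.array([[c[1], c[0], 0   ],
              [c[3], c[2], c[1]],
              [0,    0,    c[3]]])

# phi vanishes outside [0, 3]; (phi(1), phi(2)) is the lambda = 1 eigenvector
# of the 2 by 2 matrix in (10), normalized so the integer values sum to 1.
M = np.array([[c[1], c[0]], [c[3], c[2]]])
vals, vecs = np.linalg.eig(M)
e = vecs[:, np.argmin(abs(vals - 1))].real
v0 = np.array([0.0, *(e / e.sum())])                    # v(0) = (phi(0), phi(1), phi(2))

def v(bits):
    """v(x) for dyadic x = .b1 b2 ... : apply A for digit 0, B for digit 1."""
    out = v0
    for b in reversed(bits):                            # innermost digit applied first
        out = (A if b == 0 else B) @ out
    return out

print(v([0, 1, 0, 0, 1]))   # v(.01001) = (A B A A B) v(0), as in (11)
```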
When x is not a dyadic point p/2^n, the recursion in (11) does not terminate. The binary expansion x = .0100101 . . . corresponds to an infinite product ABAABAB . . . . The convergence of such a product is by no means assured. It is a major problem to find a direct test on the c’s that is equivalent to convergence — for matrix products in every order. We briefly describe what is known for arbitrary A and B.
For a single matrix A, the growth of the powers Aⁿ is governed by the spectral radius ρ(A) = max |λᵢ|. Any norm of Aⁿ is roughly the nth power of this largest eigenvalue. Taking nth roots makes this precise:

$$\lim_{n\to\infty} \|A^n\|^{1/n} = \rho(A).$$
The powers approach zero if and only if ρ(A) < 1.

For two or more matrices, the same process produces the joint spectral radius [14]. The powers Aⁿ are replaced by products Πₙ of n A’s and B’s. The maximum of ‖Πₙ‖, allowing products in all orders, is still submultiplicative. The limit of nth roots (also the infimum) is the joint spectral radius:

$$\text{(12)}\qquad \lim_{n\to\infty}\left(\max \|\Pi_n\|\right)^{1/n} = \rho(A, B).$$

The difficulty is not to define ρ(A, B) but to compute it. For symmetric or normal or commuting or upper triangular matrices it is the larger of ρ(A) and ρ(B). Otherwise eigenvalues of products are not controlled by products of eigenvalues. An example with zero eigenvalues, ρ(A) = 0 = ρ(B), is

$$A = \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 0 \\ 2 & 0 \end{pmatrix}, \qquad AB = \begin{pmatrix} 4 & 0 \\ 0 & 0 \end{pmatrix}.$$

In this case ρ(A, B) = ‖AB‖^{1/2} = 2. The product ABABAB . . . diverges. In general ρ is a function of the matrix entries, bounded above by norms and below by eigenvalues. Since one possible infinite product is a repetition of any particular Πₙ (in the example it was AB), the spectral radius of that single matrix gives a lower bound on the joint radius:

$$\left(\rho(\Pi_n)\right)^{1/n} \le \rho(A, B).$$

A beautiful theorem of Berger and Wang [15] asserts that these eigenvalues of products yield the same limit (now a supremum) that was approached by norms:

$$\text{(13)}\qquad \limsup_{n\to\infty}\left(\max \rho(\Pi_n)\right)^{1/n} = \rho(A, B).$$

It is conjectured by Lagarias and Wang that equality is reached at a finite product Πₙ. Heil and the author noticed a corollary of the Berger–Wang theorem: ρ is a continuous function of A and B. It is upper-semicontinuous from (12) and lower-semicontinuous from (13).
Returning to the dilation equation, the matrices A and B share the left eigenvector (1, 1, 1). On the complementary subspace, they reduce to

$$A' = \begin{pmatrix} c_0 & 0 \\ -c_3 & 1 - c_0 - c_3 \end{pmatrix} \qquad\text{and}\qquad B' = \begin{pmatrix} 1 - c_0 - c_3 & -c_0 \\ 0 & c_3 \end{pmatrix}.$$

It is ρ(A′, B′) that decides the size of φ(x) − φ(y). Continuity follows from ρ < 1 [16]. Then φ and W belong to C^α for all α less than −log₂ ρ. (When α > 1, derivatives of integer order [α] have Hölder exponent α − [α].) In Sobolev spaces H^s, Eirola and Villemoes [17, 18] showed how an ordinary spectral radius — computable — gives the exact regularity s.
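The lower bound (13) can be scanned by brute force. In this numerical sketch (mine), the bound for the Daubechies A′, B′ stalls at ρ(A′) = (1 + √3)/4 ≈ 0.683, consistent with the Hölder exponent −log₂ 0.683 ≈ 0.55 quoted in §5.

```python
import numpy as np
from itertools import product

s = np.sqrt(3)
c0, c3 = (1 + s) / 4, (1 - s) / 4
A = np.array([[c0, 0.0], [-c3, 1 - c0 - c3]])
B = np.array([[1 - c0 - c3, -c0], [0.0, c3]])

def lower_bound(n):
    """max over all products Pi_n of rho(Pi_n)^(1/n): a lower bound for rho(A, B)."""
    best = 0.0
    for word in product([A, B], repeat=n):
        Pi = word[0]
        for M in word[1:]:
            Pi = Pi @ M
        best = max(best, max(abs(np.linalg.eigvals(Pi))) ** (1.0 / n))
    return best

for n in (1, 2, 4, 8):
    print(n, lower_bound(n))     # stalls near (1 + sqrt(3))/4 ~ 0.683
```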

5. Accuracy and orthogonality


For the Daubechies coefficients, the dilation equation does produce a continuous
φ(x) with Hölder exponent 0.55 (it is differentiable almost everywhere). Then (8)

Figure 2. The family W4(2^j x − k) is orthogonal. Translates of D4 can reproduce any ax + b. Daubechies also found D_{2p} with orthogonality and pth order accuracy.

constructs the wavelet. Figure 2 shows φ and W with c0, c1, c2, c3 = (1/4)(1 + √3), (1/4)(3 + √3), (1/4)(3 − √3), (1/4)(1 − √3).
What is special about the four Daubechies coefficients? They satisfy the requirement A2 for second-order accuracy and the separate requirement O for orthogonality. We can state Condition A2 in several forms. In terms of W, the moments ∫ W(x) dx and ∫ xW(x) dx are zero. Then the Fourier transform of (8) yields P(π) = P′(π) = 0. In terms of the c’s (or the symbol P(ξ) = (1/2) Σ c_k e^{ikξ}), the condition for accuracy of order p is A_p:

$$\text{(14)}\qquad \sum_k (-1)^k k^m c_k = 0 \ \text{ for } m < p \qquad\text{or equivalently}\qquad P(\xi + \pi) = O(|\xi|^p).$$
This assures that translates of φ reproduce (locally) the powers 1, x, . . . , x^{p−1} [19]. The zero moments are the orthogonality of these powers to W. Then the Taylor series of f(x) can be matched to degree p at each meshpoint. The error in wavelet approximation is of order h^p, where h = 2^{−j} is the mesh width or translation step of the local functions W(2^j x). The price for each extra order of accuracy is two extra coefficients c_k — which spreads the support of φ and W by two intervals. A reasonable compromise is p = 3. The new short wavelets may offer an alternative.
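Condition A_p in (14) is a finite check on the c’s. A quick sketch (mine) for the four Daubechies coefficients: the alternating sums vanish for m = 0 and m = 1 but not for m = 2, so the accuracy is exactly p = 2.

```python
import numpy as np

s = np.sqrt(3)
c = np.array([1 + s, 3 + s, 3 - s, 1 - s]) / 4
k = np.arange(4.0)

for m in (0, 1, 2):
    print(m, np.sum((-1.0) ** k * k ** m * c))
# m = 0: 0,  m = 1: 0,  m = 2: sqrt(3) != 0  -- second-order accuracy exactly
```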
Condition A_p also produces zeros in the infinite product φ̂(ξ) = ∏ P(ξ/2^j). Every nonzero integer has the form n = 2^{j−1}m, m odd. Then φ̂(2πn) has the factor P(2πn/2^j) = P(mπ) = P(π). Therefore, the pth order zero at ξ = π in Condition A_p ensures a pth order zero of φ̂ at each ξ = 2πn. This is the test for the translates of φ to reproduce 1, x, . . . , x^{p−1}. That step closes the circle and means approximation to order p. Please forgive this brief recapitulation of an older theory — the novelty of wavelets is their orthogonality. This is tested by Condition O:

$$\text{(15)}\qquad \sum_k c_k c_{k-2m} = 2\,\delta_{0m} \qquad\text{or equivalently}\qquad |P(\xi)|^2 + |P(\xi + \pi)|^2 \equiv 1.$$

The first condition follows directly from (φ(x), φ(x − m)) = δ_{0m}. The dilation equation converts this to (Σ c_k φ(2x − k), Σ c_ℓ φ(2x − 2m − ℓ)) = δ_{0m}. It is the “perfect reconstruction condition” of digital signal processing [20–22]. It assures that the L² norm is preserved, when the signal f(x) is separated by a low-pass filter L and a high-pass filter H. The two parts have ‖Lf‖² + ‖Hf‖² = ‖f‖². A filter is just a convolution. In frequency space that makes it a multiplication. Low-pass means that constants and low frequencies survive — we multiply by a symbol P(ξ) that is near 1 for small |ξ|. High-pass means the opposite, and for wavelets the multiplier is essentially P(ξ + π). The two convolutions are “mirror filters”.
In the discrete case, the filters L and H (with downsampling to remove every second row) fit into an orthogonal matrix:

$$\text{(16)}\qquad \begin{pmatrix} L \\ H \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} c_0 & c_1 & c_2 & c_3 & & & \\ & & c_0 & c_1 & c_2 & c_3 & \\ & & & & \cdots & & \\ c_3 & -c_2 & c_1 & -c_0 & & & \\ & & c_3 & -c_2 & c_1 & -c_0 & \\ & & & & \cdots & & \end{pmatrix}.$$

This matrix enters each step of the wavelet transform, from vector y to wavelet coefficients b. The pyramid algorithm executes that transform by recursion with rescaling. We display two steps for a general wavelet and then specifically for Haar on [0, 1]:

$$\text{(17)}\qquad \begin{pmatrix} L & \\ H & \\ & I \end{pmatrix} \begin{pmatrix} L \\ H \end{pmatrix} \quad\text{is}\quad \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 & & \\ 1 & -1 & & \\ & & \sqrt{2} & \\ & & & \sqrt{2} \end{pmatrix} \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 & & \\ & & 1 & 1 \\ 1 & -1 & & \\ & & 1 & -1 \end{pmatrix}.$$

This product is still an orthogonal matrix. When the columns of W4 in §1 are normalized to be unit vectors, this is its inverse (and its transpose). The recursion decomposes a function into wavelets, and the reverse algorithm reconstructs it. The 2 by 2 matrix has low-pass coefficients 1, 1 from φ and high-pass coefficients 1, −1 from W. Normalized by 1/2, they satisfy Condition O (note e^{iπ} = −1), and they preserve the ℓ² norm:

$$\left|\frac{1 + e^{i\xi}}{2}\right|^2 + \left|\frac{1 + e^{i(\xi+\pi)}}{2}\right|^2 \equiv 1.$$

Figure 3 shows how those terms |P(ξ)|² and |P(ξ + π)|² are mirror functions that add to 1. It also shows how four coefficients give a flatter response — with higher accuracy at ξ = 0. Then |P|² has a fourth-order zero at ξ = π.
Figure 3. Condition O for Haar (p = 1) and Daubechies (p = 2).
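Both forms of Condition O in (15) are easy to test numerically. A sketch (mine) for the Haar and Daubechies coefficients:

```python
import numpy as np

def P(c, xi):
    """Symbol P(xi) = (1/2) sum_k c_k exp(i k xi), vectorized over xi."""
    return 0.5 * sum(ck * np.exp(1j * k * xi) for k, ck in enumerate(c))

s = np.sqrt(3)
for name, c in (("Haar", np.array([1.0, 1.0])),
                ("D4", np.array([1 + s, 3 + s, 3 - s, 1 - s]) / 4)):
    xi = np.linspace(0, 2 * np.pi, 101)
    # second form: |P(xi)|^2 + |P(xi + pi)|^2 = 1 for every xi
    mirror = abs(P(c, xi)) ** 2 + abs(P(c, xi + np.pi)) ** 2
    # first form: sum c_k^2 = 2 and the even-shifted overlap vanishes
    shift = np.dot(c[2:], c[:-2]) if len(c) > 2 else 0.0
    print(name, np.dot(c, c), shift, np.allclose(mirror, 1.0))
# Haar: 2.0 0.0 True      D4: 2.0 ~0.0 True
```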

The design of filters (the choice of convolution) is a central problem of signal processing — a field of enormous size and importance. The natural formulation is in frequency space. Its application here is to multirate filters and “subband coding”, with a sequence of scales 2^j x.
Note. Orthogonality of the family φ(x − k) leads by the Poisson summation formula to Σ_n |φ̂(ξ + 2πn)|² = 1. Applying the dilation equation (7) and separating even n from odd n shows how the second form of Condition O is connected to orthogonality:

$$\begin{aligned} \sum_n \bigl|\hat\varphi(\xi + 2\pi n)\bigr|^2 &= \sum_n \left|P\!\left(\frac\xi2 + \pi n\right)\right|^2 \left|\hat\varphi\!\left(\frac\xi2 + \pi n\right)\right|^2\\ &= \left|P\!\left(\frac\xi2\right)\right|^2 \sum_m \left|\hat\varphi\!\left(\frac\xi2 + 2\pi m\right)\right|^2 + \left|P\!\left(\frac\xi2 + \pi\right)\right|^2 \sum_m \left|\hat\varphi\!\left(\frac\xi2 + \pi(2m+1)\right)\right|^2\\ &= \left|P\!\left(\frac\xi2\right)\right|^2 + \left|P\!\left(\frac\xi2 + \pi\right)\right|^2 \quad (= 1 \text{ by Condition O}). \end{aligned}$$

The same ideas apply to W. For dilation by 3^j or M^j instead of 2^j, Heller has constructed [23] the two wavelets or M − 1 wavelets that yield approximation of order p. The orthogonality condition becomes Σ_{j=0}^{M−1} |P(ξ + 2πj/M)|² = 1.
We note a technical hypothesis that must be added to Condition O. It was found by Cohen and in a new form by Lawton (see [24, pp. 177–194]). Without it, c0 = c3 = 1 passes test O. Those coefficients give a stretched box function φ = (1/3)χ_{[0,3]} that is not orthogonal to φ(x − 1). The matrix with L and H above will be only an isometry — it has columns of zeros. The filters satisfy LL* = HH* = I and LH* = HL* = 0 but not L*L + H*H = I. The extra hypothesis is applied to this matrix A, or after Fourier transform to the operator A:

$$A_{ij} = \frac12\sum_{k=0}^{N} c_k c_{j-2i+k} \qquad\text{or}\qquad Af(\xi) = \left|P\!\left(\frac\xi2\right)\right|^2 f\!\left(\frac\xi2\right) + \left|P\!\left(\frac\xi2 + \pi\right)\right|^2 f\!\left(\frac\xi2 + \pi\right).$$

The matrix A with |i| < N and |j| < N has two eigenvectors for λ = 1. Their components are v_m = δ_{0m} and w_m = (φ(x), φ(x − m)). Those must be the same! Then the extra condition, added to O, is that λ = 1 shall be a simple eigenvalue.
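The test is concrete enough to run. In this sketch (mine; note the factor 1/2 in A, which makes δ_{0m} an eigenvector for λ = 1), the Daubechies coefficients give a simple eigenvalue at 1 while the stretched box does not:

```python
import numpy as np

def transition_matrix(c):
    """A_{ij} = (1/2) sum_k c_k c_{j-2i+k} for |i|, |j| < N, N = len(c) - 1."""
    N = len(c) - 1
    def a(n):   # autocorrelation  sum_k c_k c_{k-n}
        return sum(c[k] * c[k - n] for k in range(len(c)) if 0 <= k - n < len(c))
    idx = range(-(N - 1), N)
    return np.array([[0.5 * a(2 * i - j) for j in idx] for i in idx])

s = np.sqrt(3)
for name, c in (("D4", np.array([1 + s, 3 + s, 3 - s, 1 - s]) / 4),
                ("stretched box", np.array([1.0, 0.0, 0.0, 1.0]))):
    lam = np.linalg.eigvals(transition_matrix(c))
    print(name, "eigenvalues at 1:", int(np.sum(np.isclose(lam, 1.0))))
# D4: 1 (simple, so translates are orthogonal); stretched box: more than one
```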
In summary, Daubechies used the minimum number 2p of coefficients c_k to satisfy the accuracy condition A_p together with orthogonality. These wavelets furnish unconditional bases for the key spaces of harmonic analysis (L^p, Hölder, Besov, Hardy space H¹, BMO, . . . ). The Haar–Walsh construction fits functions with no extra smoothness [25]. Higher-order wavelets fit Sobolev spaces, where functions have derivatives in L^p (see [11, pp. 24–27]). With marginal exponent p = 1 or even p < 1, the wavelet transform still maps onto the right discrete spaces.

6. The contest: Fourier vs. wavelets


This brief report is included to give some idea of the decisions now being reached about standards for video compression. The reader will understand that the practical and financial consequences are very great. Starting from an image in which each color at each small square (pixel) is assigned a numerical shading between 0 and 255, the goal is to compress all that data to reduce the transmission cost. Since 256 = 2^8, we have 8 bits for each of red-green-blue. The bit-rate of transmission is set by the channel capacity, the compression rule is decided by the filters and quantizers, and the picture quality is subjective. Standard images are so familiar that experts know what to look for — like tasting wine or tea.
Think of the problem mathematically. We are given f (x, y, t), with x-y axes
on the TV screen and the image f changing with time t. For digital signals all
variables are discrete, but a continuous function is close — or piecewise continuous
when the image has edges. Probably f changes gradually as the camera moves. We
could treat f as a sequence of still images to compress independently, which seems
inefficient. But the direction of movement is unpredictable, and too much effort
spent on extrapolation is also inefficient. A compromise is to encode every fifth or
tenth image and, between those, to work with the time differences ∆f — which
have less information and can be compressed further.
Fourier methods generally use real transforms (cosines). The picture is broken
into blocks, often 8 by 8. This improvement in the scale length is more important
than the control of log n in the FFT cost. (It may well be more important than
the choice of Fourier.) After twenty years of refinement, the algorithms are still
being fought over and improved. Wavelets are a recent entry, not yet among the
heavyweights. The accuracy test Ap is often set aside in the goal of constructing
“brick wall filters” — whose symbols P (ξ) are near to characteristic functions.
An exact zero-one function in Figure 3 is of course impossible — the designers are
frustrated by a small theorem in mathematics. (Compact support of f and f̂ occurs
only for f ≡ 0.) In any case the Fourier transform of a step function has oscillations
that can murder a pleasing signal — so a compromise is reached.
Orthogonality is not set aside. It is the key constraint. There may be eight or
more bands (8 times 8 in two dimensions) instead of two. Condition O has at least
eight terms |P(ξ + kπ/8)|². After applying the convolutions, the energy or entropy
in the high frequencies is usually small and the compression of that part of the
signal is increased — to avoid wasting bits. The actual encoding or “quantization”
is a separate and very subtle problem, mapping the real numbers to {1, . . . , N }.
A vector quantizer is a map from Rd , and the best are not just tensor products
[28]. Its construction is probably more important to a successful compression than
refining the filter.
Audio signals have fewer dimensions and more bands — as many as 512. One goal of compression is a smaller CD disk. Auditory information seems to come in octaves of roughly equal energy — the energy density decays like 1/ξ. Also physically, the cochlea has several critical bands per octave. (An active problem in audio compression is to use psychoacoustic information about the ear.) Since ∫ dξ/ξ is the same from 1 to 2 and 2 to 4 and 4 to 8 (by a theorem we teach freshmen!), subband coding stands a good chance.
That is a barely adequate description of a fascinating contest. It is applied
analysis (and maybe free enterprise) at its best. For video compression, the Motion
Picture Experts Group held a competition in Japan late in 1991. About thirty
companies entered algorithms. Most were based on cosine transforms, a few on
wavelets. The best were all windowed Fourier. Wavelets were down the list but
not unhappy. Consolation was freely offered and accepted. The choice for HDTV,
with high definition, may be different from this MPEG standard to send a rougher
picture at a lower bit-rate.
I must emphasize: The real contest is far from over. There are promising wavelets (Wilson bases and coiflets) that were too recent to enter. Hardware is only beginning to come — the first wavelet chips are available. MPEG did not see the best that all transforms can do.
In principle, wavelets are better for images, and Fourier is the right choice for music. Images have sharp edges; music is sinusoidal. The jth Fourier coefficient of a step function is of order 1/j. The wavelet coefficients (mostly zero) are multiples of 2^{−j/2}. The L² error drops exponentially, not polynomially, when N terms are kept.
To confirm this comparison, Donoho took digitized photos of his statistics class.
He discarded 95% of the wavelet and the Fourier coefficients, kept the largest 5%,
and reconstructed two pictures. (The wavelets were “coiflets” [24], with greater
smoothness and symmetry but longer support. Fourier blocks were not tried.)
Every student preferred the picture from wavelets.
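A one-dimensional toy version of that experiment (my sketch, using the orthonormal Haar pyramid instead of coiflets): keep the largest 5% of the coefficients of a signal with a jump and reconstruct.

```python
import numpy as np

def haar(y):                       # orthonormal Haar pyramid (forward)
    y = np.asarray(y, dtype=float); out = []
    while len(y) > 1:
        out.append((y[0::2] - y[1::2]) / np.sqrt(2))
        y = (y[0::2] + y[1::2]) / np.sqrt(2)
    return np.concatenate([y] + out[::-1])

def ihaar(b):                      # inverse pyramid (reconstruction)
    y, pos = b[:1], 1
    while pos < len(b):
        d = b[pos:2 * pos]; pos *= 2
        z = np.empty(pos)
        z[0::2] = (y + d) / np.sqrt(2)
        z[1::2] = (y - d) / np.sqrt(2)
        y = z
    return y

n = 1024
x = np.arange(n) / n
f = np.where(x < 0.3, 1.0, 0.0) + 0.2 * np.sin(2 * np.pi * x)  # edge + smooth part

b = haar(f)
cut = np.sort(abs(b))[-n // 20]                  # threshold for the largest 5%
g = ihaar(np.where(abs(b) >= cut, b, 0.0))
print(np.linalg.norm(g - f) / np.linalg.norm(f))   # small relative L2 error
```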
The underlying rule for basis functions seems to be this: choose scale lengths that
match the image and allow for spatial variability. Smoothness is visually important,
and D4 is being superseded. Wavelets are not the only possible construction, but
they have opened the door to new bases. In the mathematical contest (perhaps
eventually in the business contest) unconditional bases are the winners.
We close by mentioning fingerprints. The FBI has more than 30 million in
filing cabinets, counting only criminals. Comparing one to thousands of others is
a daunting task. Every improvement leads to new matches and the solution of old
crimes. The images need to be digitized.
The definitive information for matching fingerprints is in the “minutiae” of ridge endings and bifurcations [29]. At 500 pixels per inch, with 256 levels of gray, each card has 10^7 bytes of data. Compression is essential and 20 : 1 is the goal. The
standard from the Joint Photographic Experts Group (JPEG) is Fourier-based,
with 8 by 8 blocks, and the ridges are broken. The competition is now between
wavelet algorithms associated with Los Alamos and Yale [30–33] — fixed basis
versus “best basis”, ℓ < 100 subbands or ℓ > 1000, vector or scalar quantization.
There is also a choice of coding for wavelet coefficients (mostly near zero when the
basis is good). The best wavelets may be biorthogonal — coming from two wavelets
W1 and W2 . This allows a left-right symmetry [24], which is absent in Figure 2.
The fingerprint decision is a true contest in applying pure mathematics.
Acknowledgment
I thank Peter Heller for a long conversation about the MPEG contest and its
rules.

Additional note. After completing this paper I learned, with pleasure and amaze-
ment, that a thesis which I had promised to supervise (“formally”, in the most
informal sense of that word) was to contain the filter design for MIT’s entry in the
HDTV competition. The Ph.D. candidate is Peter Monta. The competition is still
ahead (in 1992). Whether won or lost, I am sure the degree will be granted! These
paragraphs briefly indicate how the standards for High Definition Television aim
to yield a very sharp picture.
The key is high resolution, which requires a higher bit-rate of transmission. For the MPEG contest in Japan — to compress videos onto CD’s and computers — the rate was 1 megabit/second. For the HDTV contest that number is closer to 24. Both compression ratios are about 100 to 1. (The better picture has more pixels.) The audio signal gets 1/2 megabit/sec for its four stereo channels; closed captions use less. In contrast, conventional television has no compression at all — in principle, you see everything. The color standard was set in 1953, and the black and white standard about 1941.
The FCC will judge between an AT&T/Zenith entry, two MIT/General Instru-
ments entries, and a partly European entry from Philips and others. These finalists
are all digital, an advance which surprised the New York Times. Monta proposed
a filter that uses seven coefficients or “taps” for low-pass and four for high-pass.
Thus the filters are not mirror images as in wavelets, or brick walls either. Two-
dimensional images come from tensor products of one-dimensional filters. Their
exact coefficients will not be set until the last minute, possibly for secrecy — and
cosine transforms may still be chosen in the end.
The red-green-blue components are converted by a 3 by 3 orthogonal matrix
to better coordinates. Linear algebra enters, literally the spectral theorem. The
luminance axis from the leading eigenvector gives the brightness.
A critical step is motion estimation, to give a quick and close prediction of
successive images. A motion vector is estimated for each region in the image [34].
The system transmits only the difference between predicted and actual images —
the “motion compensated residual”. When that has too much energy, the motion
estimator is disabled and the most recent image is sent. This will be the case when
there is a scene change. Note that coding decisions are based on the energy in
different bands (the size of Fourier coefficients). The L1 norm is probably better.
Other features may be used in 2001.
It is very impressive to see an HDTV image. The final verdict has just been promised for the spring of 1993. Wavelets will not be in that standard, but they have no shortage of potential applications [24, 35–37]. A recent one is the LANDSAT satellite, which will locate a grid on the earth with pixel width of 2 yards. The compression algorithm that does win will use good mathematics.

References
1. A. Haar, Zur Theorie der orthogonalen Funktionensysteme, Math. Ann. 69 (1910), 331–
371.
2. J. L. Walsh, A closed set of normal orthogonal functions, Amer. J. Math. 45 (1923), 5–24.
3. R. E. Blahut, Fast algorithms for digital signal processing, Addison-Wesley, New York,
1984.
4. C. Van Loan, Computational frameworks for the fast Fourier transform, SIAM, Philadel-
phia, PA, 1992.
5. G. Strang, Introduction to applied mathematics, Wellesley-Cambridge Press, Wellesley,
MA, 1986.
6. W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical recipes,
Cambridge Univ. Press, Cambridge, 2nd ed., 1993.
7. P. Duhamel and M. Vetterli, Fast Fourier transforms : a tutorial review, Signal Processing
19 (1990), 259–299.
8. G. Strang, Introduction to linear algebra, Wellesley–Cambridge Press, Wellesley, MA, 1993.
9. I. Daubechies, Orthogonal bases of compactly supported wavelets, Comm. Pure Appl. Math.
41 (1988), 909–996.
10. P. G. Lemarié (ed.), Les ondelettes en 1989, Lecture Notes in Math., vol. 1438, Springer-
Verlag, New York, 1990.
11. S. Mallat, Multiresolution approximations and wavelet orthogonal bases of L2 (R), Trans.
Amer. Math. Soc. 315 (1989), 69–88; A theory for multiresolution approximation: the
wavelet representation, IEEE Trans. PAMI 11 (1989), 674–693.
12. I. Daubechies and J. Lagarias, Two–scale difference equations: I. Existence and global
regularity of solutions, SIAM J. Math. Anal. 22 (1991), 1388–1410; II. Local regularity,
infinite products of matrices and fractals, SIAM J. Math. Anal. 23 (1992), 1031–1079.
13. , Sets of matrices all infinite products of which converge, Linear Algebra Appl. 161
(1992), 227–263.
14. G.-C. Rota and G. Strang, A note on the joint spectral radius, Kon. Nederl. Akad. Wet.
Proc. A 63 (1960), 379–381.
15. M. Berger and Y. Wang, Bounded semigroups of matrices, Linear Algebra Appl. 166
(1992), 21–28.
16. D. Colella and C. Heil, Characterizations of scaling functions, I. Continuous solutions,
SIAM J. Matrix Anal. Appl. 15 (1994) (to appear).
17. T. Eirola, Sobolev characterization of solutions of dilation equations, SIAM J. Math. Anal.
23 (1992), 1015–1030.
18. L. F. Villemoes, Energy moments in time and frequency for two-scale difference equation
solutions and wavelets, SIAM J. Math. Anal. 23 (1992), 1519–1543.
19. G. Strang, Wavelets and dilation equations : a brief introduction, SIAM Review 31 (1989),
614–627.
20. O. Rioul and M. Vetterli, Wavelets and signal processing, IEEE Signal Processing Mag. 8
(1991), 14–38.
21. M. Vetterli and C. Herley, Wavelets and filter banks: theory and design, IEEE Trans.
Acoust. Speech Signal Process. 40 (1992), 2207–2232.
22. P. P. Vaidyanathan, Multirate digital filters, filterbanks, polyphase networks, and appli-
cations : a tutorial, Proc. IEEE 78 (1990), 56–93; Multirate systems and filter banks,
Prentice-Hall, Englewood Cliffs, NJ, 1993.
23. P. Heller, Regular M -band wavelets, SIAM J. Matrix Anal. Appl. (to appear).
24. I. Daubechies, Ten lectures on wavelets, SIAM, Philadelphia, PA, 1992.
25. F. Schipp, W. R. Wade, and P. Simon, Walsh series, Akadémiai Kiadó and Adam Hilger,
Budapest and Bristol, 1990.
26. Y. Meyer, Ondelettes et opérateurs, Hermann, Paris, 1990; Wavelets, translation to be
published by Cambridge Univ. Press.
27. R. DeVore and B. J. Lucier, Wavelets, Acta Numerica 1 (1991), 1–56.
28. N. S. Jayant and P. Noll, Digital coding of waveforms, Prentice–Hall, Englewood Cliffs,
NJ, 1984.
29. T. Hopper and F. Preston, Compression of grey-scale fingerprint images, Data Compres-
sion Conference, IEEE Computer Society Press, New York, 1992.
30. M. V. Wickerhauser, High-resolution still picture compression, preprint.
31. M. V. Wickerhauser and R. R. Coifman, Entropy based methods for best basis selection,
IEEE Trans. Inform. Theory 38 (1992), 713–718.
32. R. DeVore, B. Jawerth, and B. J. Lucier, Image compression through wavelet transform
coding, IEEE Trans. Inform. Theory 38 (1992), 719–746.
33. J. N. Bradley and C. Brislawn, Compression of fingerprint data using the wavelet vector
quantization image compression algorithm, Los Alamos Report 92–1507, 1992.
34. J. Lim, Two-dimensional signal and image processing, Prentice-Hall, Englewood Cliffs,
NJ, 1990.
35. G. Beylkin, R. R. Coifman, and V. Rokhlin, Fast wavelet transforms and numerical algo-
rithms, Comm. Pure Appl. Math. 44 (1991), 141–183.
36. C. K. Chui, An introduction to wavelets, Academic Press, New York, 1992.
37. M. B. Ruskai et al., Wavelets and their applications, Jones and Bartlett, Boston, 1992.
38. J. S. Geronimo, D. P. Hardin, and P. R. Massopust, Fractal functions and wavelet expan-
sions based on several scaling functions (to appear).

Department of Mathematics, Massachusetts Institute of Technology, Cambridge,


Massachusetts 02139
E-mail address: [email protected]
