Fourier Transforms & FFT Algorithm (Paul Heckbert, 1998) by Tantanoid
The Fourier transform of a continuous function f(x) and its inverse are

    F(ω) = ∫_{−∞}^{+∞} f(x) e^{−iωx} dx

    f(x) = (1/2π) ∫_{−∞}^{+∞} F(ω) e^{iωx} dω
Think of it as a transformation into a different set of basis functions. The Fourier transform uses complex exponentials (sinusoids) of various frequencies as its basis functions. (Other transforms, such as Z, Laplace, Cosine, Wavelet, and Hartley, use different basis functions.) A Fourier transform pair is often written f(x) ↔ F(ω), or F(f(x)) = F(ω), where F is the Fourier transform operator. If f(x) is thought of as a signal (i.e. input data) then we call F(ω) the signal's spectrum. If f is thought of as the impulse response of a filter (which operates on input data to produce output data) then we call F the filter's frequency response. (Occasionally the line between what's signal and what's filter becomes blurry.)
where sinc(x) = sin(πx)/(πx). For antialiasing with unit-spaced samples, you want the cutoff frequency to equal the Nyquist frequency, so ω_c = π.
Some Fourier transform pairs and properties:

    f(x)                  F(ω)
    discrete              periodic
    periodic              discrete
    discrete, periodic    discrete, periodic
    real                  conjugate symmetric
    imaginary             conjugate antisymmetric
    box                   sinc
    sinc                  box
    Gaussian              Gaussian
    impulse               constant
    impulse train         impulse train

(can you prove the above?)

When a signal is scaled up spatially, its spectrum is scaled down in frequency, and vice versa: f(ax) ↔ (1/|a|) F(ω/a) for any real, nonzero a.
Convolution Theorem
The Fourier transform of a convolution of two signals is the product of their Fourier transforms: f * g ↔ FG. The convolution of two continuous signals f and g is

    (f * g)(x) = ∫_{−∞}^{+∞} f(t) g(x − t) dt

So

    ∫_{−∞}^{+∞} f(t) g(x − t) dt ↔ F(ω)G(ω).
The Fourier transform of a product of two signals is the convolution of their Fourier transforms: fg ↔ F * G/2π.
Delta Functions
The (Dirac) delta function δ(x) is defined such that δ(x) = 0 for all x ≠ 0,

    ∫_{−∞}^{+∞} δ(t) dt = 1,

and for any f(x):

    (f * δ)(x) = ∫_{−∞}^{+∞} f(t) δ(x − t) dt = f(x)
The latter is called the sifting property of delta functions. Because convolution with a delta is linear shift-invariant filtering, translating the delta by a will translate the output by a:

    f(x) * δ(x − a) = f(x − a)
The discrete Fourier transform (DFT) of a sequence of N numbers a_n, n = 0 ... N−1, is

    A_k = Σ_{n=0}^{N−1} e^{−2πikn/N} a_n

or

    A_k = Σ_{n=0}^{N−1} W_N^{kn} a_n        (1)

where

    W_N = e^{−2πi/N}
and W_N^k for k = 0 ... N−1 are called the Nth roots of unity. They're called this because, in complex arithmetic, (W_N^k)^N = 1 for all k. They're vertices of a regular polygon inscribed in the unit circle of the complex plane, with one vertex at (1, 0). Below are roots of unity for N = 2, N = 4, and N = 8, graphed in the complex plane.
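The roots of unity are easy to check numerically. Below is a brief sketch in Python (not part of the notes; the function name is mine) verifying that (W_N^k)^N = 1 and that every root lies on the unit circle:

```python
import cmath

def root_of_unity(N, k):
    """Return W_N^k = e^(-2*pi*i*k/N), the k-th of the N-th roots of unity."""
    return cmath.exp(-2j * cmath.pi * k / N)

# Verify the claims for N = 2, 4, 8 and all k.
for N in (2, 4, 8):
    for k in range(N):
        w = root_of_unity(N, k)
        assert abs(w**N - 1) < 1e-9       # (W_N^k)^N = 1
        assert abs(abs(w) - 1) < 1e-12    # lies on the unit circle

# One vertex is always at (1, 0):
assert abs(root_of_unity(8, 0) - 1) < 1e-12
```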
[Figure: the Nth roots of unity W_N^k plotted on the unit circle of the complex plane (Re horizontal, Im vertical), for N = 2, N = 4, and N = 8. Each polygon has one vertex at W^0 = 1; for N = 8, W_8^2 = −i, W_8^4 = −1, and W_8^6 = i.]
Powers of roots of unity are periodic with period N, since the Nth roots of unity are points on the complex unit circle every 2π/N radians apart, and multiplying by W_N is equivalent to rotation clockwise by this angle. Multiplication by W_N^N is rotation by 2π radians, that is, no rotation at all. In general, W_N^k = W_N^{k+jN} for all integer j. Thus, when raising W_N to a power, the exponent can be taken modulo N.

The sequence A_k is the discrete Fourier transform of the sequence a_n. Each is a sequence of N complex numbers. The sequence a_n is the inverse discrete Fourier transform of the sequence A_k. The formula for the inverse DFT is

    a_n = (1/N) Σ_{k=0}^{N−1} W_N^{−kn} A_k
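The forward and inverse formulas can be transcribed directly; here is an illustrative sketch in Python (function names are mine) that evaluates both by brute force and confirms they are inverses:

```python
import cmath

def dft(a):
    """Direct evaluation of A_k = sum_n W_N^{kn} a_n, with W_N = e^{-2 pi i/N}."""
    N = len(a)
    W = cmath.exp(-2j * cmath.pi / N)
    return [sum(W**(k * n) * a[n] for n in range(N)) for k in range(N)]

def idft(A):
    """Inverse DFT: a_n = (1/N) sum_k W_N^{-kn} A_k."""
    N = len(A)
    W = cmath.exp(-2j * cmath.pi / N)
    return [sum(W**(-k * n) * A[k] for k in range(N)) / N for n in range(N)]

a = [1, 2 - 1j, 0.5, 3j]
assert all(abs(x - y) < 1e-9 for x, y in zip(a, idft(dft(a))))
```

Note the roles of a and A, the negated exponent, and the 1/N in front, exactly as described above.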
The formula is identical except that a and A have exchanged roles, as have k and n. Also, the exponent of W is negated, and there is a 1/N normalization in front.
For example, when N = 4, W_4 = e^{−πi/2} = −i, and

    A_k = Σ_{n=0}^{3} W_4^{kn} a_n

so

    A_0 = a_0 + a_1 + a_2 + a_3
    A_1 = a_0 − ia_1 − a_2 + ia_3
    A_2 = a_0 − a_1 + a_2 − a_3
    A_3 = a_0 + ia_1 − a_2 − ia_3

This can also be written as a matrix multiply:

    [A_0]   [1  1  1  1] [a_0]
    [A_1] = [1 −i −1  i] [a_1]
    [A_2]   [1 −1  1 −1] [a_2]
    [A_3]   [1  i −1 −i] [a_3]

More on this later. To compute A quickly, we can pre-compute common subexpressions:

    A_0 = (a_0 + a_2) + (a_1 + a_3)
    A_1 = (a_0 − a_2) − i(a_1 − a_3)
    A_2 = (a_0 + a_2) − (a_1 + a_3)
    A_3 = (a_0 − a_2) + i(a_1 − a_3)
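The common-subexpression version can be checked against the written-out sums. A small sketch in Python (the function name `dft4` is mine, not from the notes):

```python
def dft4(a0, a1, a2, a3):
    """4-point DFT using pre-computed common subexpressions (W_4 = -i)."""
    s02, d02 = a0 + a2, a0 - a2     # shared by A0/A2 and A1/A3
    s13, d13 = a1 + a3, a1 - a3
    return (s02 + s13,              # A0
            d02 - 1j * d13,         # A1
            s02 - s13,              # A2
            d02 + 1j * d13)         # A3

a = (1, 2, 3, 4)
A = dft4(*a)
# Compare against A_k = sum_n (-i)^{kn} a_n evaluated directly:
direct = [sum((-1j)**(k * n) * an for n, an in enumerate(a)) for k in range(4)]
assert all(abs(x - y) < 1e-9 for x, y in zip(A, direct))
```

The shared sums and differences are computed once each, which is where the savings in adds come from.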
This saves a lot of adds. (Note that each add and multiply here is a complex (not real) operation.) If we use the following diagram for a complex multiply and add:
[Diagram: a multiply-add node with inputs p and q and output p + αq; the edge from q carries the weight α.]
[Diagram: flow graphs for the 4-point FFT (butterfly weights 1, −1, i, −i) and the 8-point FFT. The 8-point graph takes its inputs in the order a_0, a_4, a_2, a_6, a_1, a_5, a_3, a_7, passes them through three stages of butterflies with weights W^0 ... W^7, and produces the outputs A_0 ... A_7 in normal order.]
Butterflies and Bit-Reversal. The FFT algorithm decomposes the DFT into log2 N stages, each of which consists of N/2 butterfly computations. Each butterfly takes two complex numbers p and q and computes from them two other numbers, p + αq and p − αq, where α is a complex number. Below is a diagram of a butterfly operation.
[Diagram: a butterfly with inputs p and q, weight α on the edge from q, and outputs p + αq (top) and p − αq (bottom).]
In the diagram of the 8-point FFT above, note that the inputs aren't in normal order: a_0, a_1, a_2, a_3, a_4, a_5, a_6, a_7; they're in the bizarre order: a_0, a_4, a_2, a_6, a_1, a_5, a_3, a_7. Why this sequence? Below is a table of j and the index of the jth input sample, n_j:

    j           0    1    2    3    4    5    6    7
    n_j         0    4    2    6    1    5    3    7
    j base 2    000  001  010  011  100  101  110  111
    n_j base 2  000  100  010  110  001  101  011  111
The pattern is obvious if j and n_j are written in binary (last two rows of the table). Observe that each n_j is the bit-reversal of j. The sequence is also related to breadth-first traversal of a binary tree. It turns out that this FFT algorithm is simplest if the input array is rearranged to be in bit-reversed order. The re-ordering can be done in one pass through the array a:
    for j = 0 to N-1
        nj = bit_reverse(j)
        if (j < nj) swap a[j] and a[nj]

General FFT and IFFT Algorithm for N = 2^r. The previously diagrammed algorithm for the 8-point FFT is easily generalized to any power of two. The input array is bit-reversed, and the butterfly coefficients can be seen to have exponents in arithmetic sequence modulo N. For example, for N = 8, the butterfly coefficients on the last stage in the diagram are W^0, W^1, W^2, W^3, W^4, W^5, W^6, W^7. That is, powers of W in sequence. The coefficients in the previous stage have exponents 0,2,4,6,0,2,4,6, which is equivalent to the sequence 0,2,4,6,8,10,12,14 modulo 8. And the coefficients in the first stage are 1,-1,1,-1,1,-1,1,-1, which is W raised to the powers 0,4,0,4,0,4,0,4, and this is equivalent to the exponent sequence 0,4,8,12,16,20,24,28 when taken modulo 8.

The width of the butterflies (the height of the X's in the diagram) can be seen to be 1, 2, 4, ... in successive stages, and the butterflies are isolated in the first stage (groups of 1), then clustered into overlapping groups of 2 in the second stage, groups of 4 in the 3rd stage, etc. The generalization to other powers of two should be evident from the diagrams for N = 4 and N = 8.

The inverse FFT (IFFT) is identical to the FFT, except one exchanges the roles of a and A, the signs of all the exponents of W are negated, and there's a division by N at the end. Note that the fast way to compute mod(j, N) in the C programming language, for N a power of two, is with bit-wise AND: j&(N-1). This is faster than j%N, and it works for positive or negative j, while the latter does not.
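The pseudocode above and the stage structure just described can be assembled into a complete radix-2 FFT. Here is a sketch in Python (helper names such as `bit_reverse` and `fft` are mine, not from the notes):

```python
import cmath

def bit_reverse(j, r):
    """Reverse the low r bits of j (e.g. r = 3: 011 -> 110)."""
    n = 0
    for _ in range(r):
        n = (n << 1) | (j & 1)
        j >>= 1
    return n

def fft(a, inverse=False):
    """Iterative radix-2 FFT; len(a) must be a power of two."""
    N = len(a)
    r = N.bit_length() - 1
    assert 1 << r == N, "N must be a power of two"
    a = list(a)
    # Re-order the input into bit-reversed order, one pass.
    for j in range(N):
        nj = bit_reverse(j, r)
        if j < nj:
            a[j], a[nj] = a[nj], a[j]
    sign = 1 if inverse else -1          # IFFT negates the exponents of W
    half = 1                             # butterfly half-width: 1, 2, 4, ...
    while half < N:
        step = cmath.exp(sign * 2j * cmath.pi / (2 * half))
        for start in range(0, N, 2 * half):
            w = 1
            for j in range(start, start + half):
                p, q = a[j], w * a[j + half]
                a[j], a[j + half] = p + q, p - q   # butterfly: p + aq, p - aq
                w *= step
        half *= 2
    if inverse:
        a = [x / N for x in a]           # the 1/N normalization
    return a

# Round trip: IFFT(FFT(x)) recovers x.
x = [1, 2, 3, 4, 0, 0, 0, 0]
assert all(abs(u - v) < 1e-9 for u, v in zip(x, fft(fft(x), inverse=True)))
```

The `half` variable tracks the butterfly width 1, 2, 4, ... of successive stages, matching the description above.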
Rearranging so that the input array a is bit-reversed and factoring the 8 × 8 matrix:
    [A_0]   [W^0 W^0 W^0 W^0 W^0 W^0 W^0 W^0] [a_0]
    [A_1]   [W^0 W^4 W^2 W^6 W^1 W^5 W^3 W^7] [a_4]
    [A_2]   [W^0 W^0 W^4 W^4 W^2 W^2 W^6 W^6] [a_2]
    [A_3] = [W^0 W^4 W^6 W^2 W^3 W^7 W^1 W^5] [a_6]
    [A_4]   [W^0 W^0 W^0 W^0 W^4 W^4 W^4 W^4] [a_1]
    [A_5]   [W^0 W^4 W^2 W^6 W^5 W^1 W^7 W^3] [a_5]
    [A_6]   [W^0 W^0 W^4 W^4 W^6 W^6 W^2 W^2] [a_3]
    [A_7]   [W^0 W^4 W^6 W^2 W^7 W^3 W^5 W^1] [a_7]

      [1 · · · W^0 ·  ·  · ] [1 · W^0 ·  ·  ·  ·  · ] [1 W^0 ·  ·  ·  ·  ·  · ] [a_0]
      [· 1 · · ·  W^1 ·  · ] [· 1 ·  W^2 ·  ·  ·  · ] [1 W^4 ·  ·  ·  ·  ·  · ] [a_4]
      [· · 1 · ·  ·  W^2 · ] [1 · W^4 ·  ·  ·  ·  · ] [·  ·  1 W^0 ·  ·  ·  · ] [a_2]
    = [· · · 1 ·  ·  ·  W^3] [· 1 ·  W^6 ·  ·  ·  · ] [·  ·  1 W^4 ·  ·  ·  · ] [a_6]
      [1 · · · W^4 ·  ·  · ] [· · ·  ·  1  ·  W^0 · ] [·  ·  ·  ·  1 W^0 ·  · ] [a_1]
      [· 1 · · ·  W^5 ·  · ] [· · ·  ·  ·  1  ·  W^2] [·  ·  ·  ·  1 W^4 ·  · ] [a_5]
      [· · 1 · ·  ·  W^6 · ] [· · ·  ·  1  ·  W^4 · ] [·  ·  ·  ·  ·  ·  1 W^0] [a_3]
      [· · · 1 ·  ·  ·  W^7] [· · ·  ·  ·  1  ·  W^6] [·  ·  ·  ·  ·  ·  1 W^4] [a_7]
where · means 0. These are sparse matrices (lots of zeros), so multiplying by the dense (no zeros) matrix on top is more expensive than multiplying by the three sparse matrices on the bottom. For N = 2^r, the factorization would involve r matrices of size N × N, each with 2 nonzero entries in each row and column.
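As a numerical check (not in the notes), one can verify that the three sparse butterfly-stage matrices multiply out to the dense DFT matrix with bit-reversed columns. A pure-Python sketch, with names of my own choosing:

```python
import cmath

W = [cmath.exp(-2j * cmath.pi * k / 8) for k in range(8)]   # W^0 .. W^7

def matmul(X, Y):
    """Multiply two 8x8 complex matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(8)) for j in range(8)]
            for i in range(8)]

# Dense 8x8 DFT matrix with its columns in bit-reversed input order.
bitrev = [0, 4, 2, 6, 1, 5, 3, 7]
dense = [[W[(k * n) % 8] for n in bitrev] for k in range(8)]

def stage(half):
    """Butterfly stage of half-width `half` as an 8x8 matrix (0s explicit)."""
    M = [[0] * 8 for _ in range(8)]
    for start in range(0, 8, 2 * half):
        for j in range(half):
            tw = W[(j * 4 // half) % 8]          # twiddle factor for this row
            M[start + j][start + j] = 1          # output p + tw*q
            M[start + j][start + j + half] = tw
            M[start + j + half][start + j] = 1   # output p - tw*q
            M[start + j + half][start + j + half] = -tw
    return M

# stage(1), stage(2), stage(4) correspond to the three factors, right to left.
product = matmul(stage(4), matmul(stage(2), stage(1)))
assert all(abs(product[i][j] - dense[i][j]) < 1e-9
           for i in range(8) for j in range(8))
```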
Evaluating the DFT formula directly,

    A_k = Σ_{n=0}^{N−1} W_N^{kn} a_n ,
would require N^2 complex multiplies and adds, which works out to 4N^2 real multiplies and 4N^2 real adds (you can easily check this, using the definition of complex multiplication). The basic computational step of the FFT algorithm is a butterfly. Each butterfly computes two complex numbers of the form p + αq and p − αq, so it requires one complex multiply (αq) and two complex adds. This works out to 4 real multiplies and 6 real adds per butterfly.
There are N/2 butterflies per stage, and log2 N stages, so that means about 4 · N/2 · log2 N = 2N log2 N real multiplies and 3N log2 N real adds for an N-point FFT. (There are ways to optimize further, but this is the basic FFT algorithm.) Cost comparison:

    N        BRUTE FORCE 4N^2    FFT 2N log2 N    speedup
    2        16                  4                4
    4        64                  16               4
    8        256                 48               5
    1,024    4,194,304           20,480           205
    65,536   1.7 × 10^10         2.1 × 10^6       10^4
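The rows of the table follow directly from the two cost formulas. A quick sketch in Python (function names are mine) reproduces the exact entries:

```python
import math

def brute_force(N):
    """Real multiplies for direct DFT evaluation: 4*N^2."""
    return 4 * N * N

def fft_cost(N):
    """Real multiplies for the radix-2 FFT: 2*N*log2(N)."""
    return 2 * N * int(math.log2(N))

rows = [(N, brute_force(N), fft_cost(N), round(brute_force(N) / fft_cost(N)))
        for N in (2, 4, 8, 1024, 65536)]
# e.g. N = 1024: 4,194,304 brute-force multiplies vs 20,480 for the FFT
assert rows[3] == (1024, 4194304, 20480, 205)
```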
The FFT algorithm is a LOT faster for big N. There are also FFT algorithms for N not a power of two. The algorithms are generally fastest when N has many factors, however. An excellent book on the FFT is: E. Oran Brigham, The Fast Fourier Transform, Prentice-Hall, Englewood Cliffs, NJ, 1974.
The DFT and FFT can also be used to compute convolutions of discrete signals. The circular (cyclic) convolution of two sequences f and g of length N is

    h[x] = Σ_{t=0}^{N−1} f[t] g[(x − t) mod N]
The convolution theorem says that the Fourier transform of the convolution of two signals is the product of their Fourier transforms: f * g ↔ FG. The corresponding theorem
for discrete signals is that the DFT of the circular convolution of two signals is the product of their DFTs. Computing the convolution with a straightforward algorithm would require N^2 (real) multiplies and adds: too expensive! We can do the same computation faster using discrete Fourier transforms. If we compute the DFT of sequence f and the DFT of sequence g, multiply them point-by-point, and then compute the inverse DFT, we'll get the same answer. This is called Fourier Convolution:
[Diagram: Fourier convolution. Spatial domain on the left, frequency domain on the right. FFT (O(N log N)) maps f to F and g to G; point-wise multiplication (O(N)) yields FG; IFFT (O(N log N)) maps FG back to f * g.]
If we use the FFT algorithm, then the two DFTs and the one inverse DFT have a total cost of 6N log2 N real multiplies, and the multiplication of transforms in the frequency domain has a negligible cost of 4N real multiplies. The straightforward algorithm, on the other hand, required N^2 real multiplies. Fourier convolution wins big for large N. Often, circular convolution isn't what you want, but this algorithm can be modified to do standard linear convolution by padding the sequences with zeros appropriately.
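The three steps of the diagram (transform, point-wise multiply, inverse transform) can be sketched directly. In this illustrative Python version (names are mine), a slow O(N^2) DFT stands in for the FFT so the block stays self-contained:

```python
import cmath

def dft(a, sign=-1):
    """Brute-force DFT; sign=+1 gives the un-normalized inverse transform."""
    N = len(a)
    return [sum(a[n] * cmath.exp(sign * 2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def fourier_convolve(f, g):
    """Circular convolution: transform both, multiply point-wise, invert."""
    N = len(f)
    FG = [x * y for x, y in zip(dft(f), dft(g))]
    return [x / N for x in dft(FG, sign=+1)]   # inverse DFT

def circular_convolve(f, g):
    """Direct O(N^2) circular convolution, for comparison."""
    N = len(f)
    return [sum(f[t] * g[(x - t) % N] for t in range(N)) for x in range(N)]

f = [1, 2, 3, 0]
g = [4, 5, 0, 0]
fast = fourier_convolve(f, g)
slow = circular_convolve(f, g)
assert all(abs(a - b) < 1e-9 for a, b in zip(fast, slow))
# Zero-padding to length >= len(f)+len(g)-1 makes circular = linear convolution.
```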
The two-dimensional DFT of an M × N image a_{m,n} is

    A_{k,l} = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} W_M^{km} W_N^{ln} a_{m,n}
This is the general formula, good for rectangular images whose dimensions are not necessarily powers of two. If you evaluate DFTs of images with this formula, the cost is O(N^4): this is way too slow if N is large! But if you exploit the common subexpressions from row to row, or from column to column, you get a speedup to O(N^3) (even without using FFT). To compute the Fourier transform of an image, you:

    Compute the 1-D DFT of each row, in place.
    Compute the 1-D DFT of each column, in place.

Most often, you see people assuming M = N = 2^r, but as mentioned previously, there are FFT algorithms for other cases. For an N × N picture, N a power of 2, the cost of a 2-D FFT is proportional to N^2 log N. (Can you derive this?) Quite a speedup relative to O(N^4)!

Practical issues: For display purposes, you probably want to cyclically translate the picture so that pixel (0,0), which now contains frequency (ω_x, ω_y) = (0, 0), moves to the center of the image. And you probably want to display pixel values proportional to log(magnitude) of each complex number (this looks more interesting than just magnitude). For color images, do the above to each of the three channels (R, G, and B) independently. FFTs are also used for synthesis of fractal textures and to create images with a given spectrum.
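The row-column method is easy to sketch. Below is an illustrative Python version (function names are mine) that does a 1-D DFT of each row, then of each column, and checks the result against the double-sum definition; for clarity it uses a brute-force 1-D DFT rather than an FFT:

```python
import cmath

def dft1(a):
    """Brute-force 1-D DFT."""
    N = len(a)
    return [sum(a[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def dft2(img):
    """2-D DFT by the row-column method."""
    rows = [dft1(row) for row in img]              # 1-D DFT of each row
    cols = [dft1(list(c)) for c in zip(*rows)]     # then of each column
    return [list(r) for r in zip(*cols)]           # transpose back

img = [[1, 2], [3, 4]]
A = dft2(img)
# Check against A_{k,l} = sum_m sum_n W_M^{km} W_N^{ln} a_{m,n}:
M, N = 2, 2
direct = [[sum(img[m][n]
               * cmath.exp(-2j * cmath.pi * k * m / M)
               * cmath.exp(-2j * cmath.pi * l * n / N)
               for m in range(M) for n in range(N))
           for l in range(N)] for k in range(M)]
assert all(abs(A[k][l] - direct[k][l]) < 1e-9 for k in range(M) for l in range(N))
```

Replacing `dft1` with an FFT gives the N^2 log N cost quoted above.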
Convolution also arises in the multiplication of polynomials. If p(x) = Σ_{n=0}^{N−1} f_n x^n and q(x) = Σ_{n=0}^{N−1} g_n x^n, then their product is

    r(x) = p(x)q(x) = (Σ_{n=0}^{N−1} f_n x^n)(Σ_{n=0}^{N−1} g_n x^n)

         = (f_0 + f_1 x + f_2 x^2 + ···)(g_0 + g_1 x + g_2 x^2 + ···)

         = f_0 g_0 + (f_0 g_1 + f_1 g_0)x + (f_0 g_2 + f_1 g_1 + f_2 g_0)x^2 + ···

         = Σ_{n=0}^{2N−2} h_n x^n
where h_n = Σ_{j=0}^{N−1} f_j g_{n−j}, and h = f * g. Thus, computing the product of two polynomials involves the convolution of their coefficient sequences.

Extended precision numbers (numbers with hundreds or thousands of significant figures) are typically stored with a fixed number of bits or digits per computer word. This is equivalent to a polynomial where x has a fixed value. For storage of 32 bits per word or 9 digits per word, one would use x = 2^32 or 10^9, respectively. Multiplication of extended precision numbers thus involves the multiplication of high-degree polynomials, or convolution of long sequences. When N is small (< 100, say), then straightforward convolution is reasonable, but for large N, it makes sense to compute convolutions using Fourier convolution.
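Both ideas, polynomial products as coefficient convolution and extended-precision multiplication as a polynomial product followed by carrying, can be sketched in a few lines of Python (names are mine; base 10 with one digit per "word" keeps the example small):

```python
def poly_mul(f, g):
    """Multiply polynomials by convolving their coefficient sequences."""
    h = [0] * (len(f) + len(g) - 1)
    for j, fj in enumerate(f):
        for k, gk in enumerate(g):
            h[j + k] += fj * gk          # h_n = sum_j f_j g_{n-j}
    return h

# (1 + 2x)(3 + x + x^2) = 3 + 7x + 3x^2 + 2x^3
assert poly_mul([1, 2], [3, 1, 1]) == [3, 7, 3, 2]

# Extended precision: digits base 10, least significant first. 123 * 45:
digits = poly_mul([3, 2, 1], [5, 4])     # raw convolution, before carrying
carry, out = 0, []
for d in digits + [0]:
    carry, r = divmod(d + carry, 10)
    out.append(r)
assert int(''.join(map(str, reversed(out)))) == 123 * 45
```

For large operands, `poly_mul` is exactly the O(N^2) convolution that Fourier convolution replaces.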