Final Exam DSP
May 24, 2018
Prof. Dr.-Ing. W. Henkel
4.) Linear prediction

a) Assume a discrete autocorrelation of R(0) = 4, R(1) = 3, R(2) = 2. What would be R(−1), R(−2)? After writing down the Toeplitz matrix, list the steps of the Levinson-Durbin algorithm. (Use row vectors!)
b) What do the vectors A and B actually determine?
c) Let us now try to get an idea what will happen if you start your iterations in the lower left corner of the matrix. What does an extension by zero from the left or right of the previous solution vector mean when you extend the matrix size to 2 × 2? Which values are preserved, which ones will be new? What kind of combination of solutions may be suitable to force the right side to zero at the corresponding position? Make use of a component known from the previous step. (This will lead to the Berlekamp-Massey algorithm used in the decoding of Reed-Solomon codes.)

5.) Bilinear transform and filter types

Assume that you selected some analog filter type, say,
H(s) = 1 / ((1 + s/ω0 + s²/ω0²)(1 + s/ω0)).
a) Check if the 3-dB edge frequency is at ω = ω0.
b) We intend to have a 3-dB edge frequency at Ω3dB = π/4. Modify the transfer function accordingly using the bilinear transform.
c) Compare the relation z = e^{sT} with the bilinear transform. What are the properties as far as the frequency scales are concerned?

6.) Tomlinson-Harashima precoding / equivalent baseband description

a) Which part of the decision-feedback equalizer is moved to the transmitter in Tomlinson-Harashima precoding? What is the advantage, and what has to be added at the transmitter which was not present in the DFE? What is the advantage of this component at the transmitter, what is the disadvantage at the receiver in terms of the signal amplitude?
b) Assume a 4-QAM signal alphabet. Determine the average power at the transmitter without and with precoding using the equivalent complex baseband domain.
c) We like to ensure that we will experience the same power in the time domain. What is the expression that ensures that the power stays the same when expressing the actual time-domain real signal from the complex baseband description? Explain!
Solution to 4.)

a) R(−1) = R(1) = 3, R(−2) = R(2) = 2 (symmetric) (1)
R = [4 3 2; 3 4 3; 2 3 4]
1 · 4 = 4
(1 0) [4 3; 3 4] = (4 3)            (0 1) [4 3; 3 4] = (3 4)
(1 0) − (3/4)(0 1) = (1 −3/4)       (0 1) − (3/4)(1 0) = (−3/4 1)
(1 −3/4) [4 3; 3 4] = (7/4 0)       (−3/4 1) [4 3; 3 4] = (0 7/4)
(1 −3/4 0) R = (7/4 0 −1/4)         (0 −3/4 1) R = (−1/4 0 7/4)
(1 −3/4 0) − (−1/4)/(7/4) · (0 −3/4 1) = (1 −6/7 1/7)       (0 −3/4 1) − (−1/4)/(7/4) · (1 −3/4 0) = (1/7 −6/7 1)
(1 −6/7 1/7) R = (12/7 0 0)         (1/7 −6/7 1) R = (0 0 12/7)
(4)
b) A1, ..., AN are the prediction coefficients, A0 addresses the actual current value; B is the mirrored solution (backward prediction). (2)
c) The boxed component is the preserved one, which is used for the next iteration. The Berlekamp-Massey algorithm can actually jump over singularities, in contrast to the Levinson-Durbin algorithm.
1 · 2 = 2
(1 0) [3 4; 2 3] = (3 4)            (0 1) [3 4; 2 3] = (2 3)
Two new components!                 One new component, one preserved!
(1 0) − (3/2)(0 1) = (1 −3/2)
...
(2)

Solution to 5.)

a) |H(s = jω0)| = |1 / ((1 + j − 1)(1 + j))| = 1/√2 (1)
b) ω0 = (2/T) · tan(Ω/2) with Ω = π/4; thereafter insert this into the given response and replace s according to the bilinear transform s = (2/T) · (1 − z⁻¹)/(1 + z⁻¹). (2)
c) z = e^{sT} leads to repetition at multiples of 2π = ωs T, corresponding to the sampling rate, whereas the bilinear transform compresses (warps) the infinite physical frequency range into the [−π, π) interval. (2)

Solution to 6.)

a) The feedback filter is moved to the transmitter. (1)
Advantage: no error propagation. (1)
A modulo operation is added, limiting the signal amplitude at the transmitter; the amplitude has to be increased again at the receiver. (2)
b) Assuming a signal alphabet at 1+j, 1−j, −1+j, −1−j:
Without precoding: average power = 2. (1)
With precoding: equally distributed in a square x ∈ [−2, 2), y ∈ [−2, 2). (1)
We simplify to one quadrant only, i.e.,
(1/(2·2)) ∫₀² ∫₀² (x² + y²) dx dy = (1/4) ∫₀² [x³/3 + y²x]₀² dy = (1/4) ∫₀² (8/3 + 2y²) dy = (1/4) [8y/3 + 2y³/3]₀² = (1/4)(16/3 + 16/3) = 8/3 ≈ 2.67
(2)
This is greater than the average power of the original constellation, which is 2.
c) x(t) = √2 ℜe{x̃(t) e^{jωc t}}
The factor √2 ensures equal power in both domains, since taking the real part of the complex exponential (a sinusoid) halves the average power. (1)
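As a numerical companion to the row-vector iteration in the solution to 4.), the following sketch (our own NumPy code, not part of the exam) reproduces the forward/backward recursion for R(0) = 4, R(1) = 3, R(2) = 2:

import numpy as np

# Levinson-Durbin with explicit row vectors A (forward) and B (backward),
# reproducing the iteration of the solution above.
R = np.array([[4., 3., 2.],
              [3., 4., 3.],
              [2., 3., 4.]])
A = np.array([1.0])     # forward solution, starts as (1)
B = np.array([1.0])     # backward solution (mirrored)
for m in range(1, R.shape[0]):
    Rm = R[:m + 1, :m + 1]
    A_ext = np.append(A, 0.0)          # extend the forward solution by a zero on the right
    B_ext = np.insert(B, 0, 0.0)       # extend the backward solution by a zero on the left
    rA = A_ext @ Rm                    # row vector times Toeplitz matrix
    rB = B_ext @ Rm
    kA = rA[-1] / rB[-1]               # eliminate the unwanted entry on the right ...
    kB = rB[0] / rA[0]                 # ... and on the left for the backward solution
    A, B = A_ext - kA * B_ext, B_ext - kB * A_ext

print(A, A @ R)   # (1, -6/7, 1/7)  ->  (12/7, 0, 0)
print(B, B @ R)   # (1/7, -6/7, 1)  ->  (0, 0, 12/7)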
2.) Linear prediction, lattice filter

a) In the Levinson-Durbin algorithm and in the design of lattice filters there was a relation between the vectors A, B and fi(n), gi(n), respectively. Express the meaning of the two solutions. (few words!)
b) What is hence the relation between A(z) and B(z) (z-domain formulation) and what would be the relation between the zeros of both polynomials?
c) Let A(z) represent a minimum-phase system. What can you say about B(z)? What would be the zero locations for both?
d) The Levinson-Durbin algorithm is known to be problematic for certain correlation matrices. What property makes it fail?

Solution
a) A and B are forward and backward predictors, respectively.
b) B(z) = z^{−deg(A(z))} A(z⁻¹), zeros mirrored at the unit circle, z → 1/z
c) Minimum phase means zeros inside the unit circle, hence B(z) is maximum phase with zeros outside the unit circle.
d) It fails when encountering singular submatrices.

3.) Video coding

a) Assume to use a time-continuous Fourier transform, but in 2 dimensions over a square region defined by 0 ≤ x, y ≤ L. The original-domain function is written as f(x, y). Formulate a 2-dimensional Fourier transform.
b) Extend the 2D function f(x, y) outside the given area in such a way that one can rewrite the 2D Fourier transform into a 2D real transformation.
c) A so-called zone plate is put in front of a camera or stored as a file. Now, you may see the original concentric rings on the screen of your laptop, but "surprisingly" additional rings in vertical and horizontal direction on a projection. Why is this happening and what does this tell about the spatial resolution of your laptop screen and the projector?
d) In time direction, one uses so-called I, P, and B frames, where the latter two result from prediction in one or both directions, respectively. What is the advantage of using prediction, i.e., what is actually stored or transmitted for those frames? Why would you, from time to time, still like to have an original I frame to be stored or transmitted?

Solution
a) F(wx, wy) = ∫_{x=0}^{L} ∫_{y=0}^{L} f(x, y) e^{−jwx x} e^{−jwy y} dy dx
b) f(−x, y) = f(x, y), f(x, −y) = f(x, y), f(−x, −y) = f(x, y), i.e., make it even in the square −L ≤ x, y ≤ L, zero elsewhere.
Then, F(wx, wy) = 4 · ∫_{x=0}^{L} ∫_{y=0}^{L} f(x, y) cos(wx x) cos(wy y) dy dx
c) Lower sampling "rate" in the projection, both horizontally and vertically
d) Only the prediction error has to be transmitted or stored. The quantized prediction error will not perfectly recover the frames and hence there will be some (growing) deviation, which can be reset by I frames.

4.) Polyphase filter, DFT, FFT

a) The DFT is given by
Fk = Σ_{i=0}^{N−1} fi e^{j 2π ik/N}
If you do not like to interpret this as a transform, how could you interpret the product with the exponential otherwise? (two words)
b) How is the relation between the individual filter impulse responses in a polyphase filter bank?
c) The complexity of a Cooley-Tukey FFT is known to be proportional to N log2(N). Intentionally, the logarithm is written with base 2. What does this logarithm stand for, knowing the structure of the FFT?
d) Imagine you would split an audio signal into subbands and would think of quantizing the sub-signals with different qualities. Would you recognize quantization effects more at lower or higher frequencies, i.e., where should the quantization be more exact?

Solution
a) Frequency shift
b) Subsampled versions in an interlaced fashion
c) N = 2^n, i.e., n = log2(N), hence log2(N) stages.
d) The quantization should be more exact at lower frequencies, since quantization steps would create high-frequency artefacts.
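A short numerical illustration of b) of the polyphase problem (prototype filter and signal are arbitrary test data; the two-branch decomposition is our own sketch):

import numpy as np

# The branch impulse responses of a polyphase decomposition are the interlaced,
# subsampled versions h[r::M] of the prototype h; filtering followed by
# downsampling by M equals summing the branch outputs at the lower rate (M = 2 here).
h = np.random.randn(8)                  # prototype filter (arbitrary values)
x = np.random.randn(100)
M = 2

y_direct = np.convolve(x, h)[::M]       # filter at the high rate, then downsample

h0, h1 = h[0::M], h[1::M]               # polyphase components of h
x0 = x[0::M]                            # even input samples
x1 = np.concatenate([[0.0], x[1::M]])   # odd samples, delayed by one low-rate step
b0 = np.convolve(x0, h0)
b1 = np.convolve(x1, h1)
y_poly = np.concatenate([b0, [0.0]]) + b1
print(np.allclose(y_direct, y_poly))    # True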
5.) IIR filter, Bilinear transform, impulse invariance

The bilinear transform is represented by a frequency transformation
ω = (2/T) · tan(Ω/2)   or   Ω = 2 · tan⁻¹(ωT/2)
and
s = (2/T) · (1 − z⁻¹)/(1 + z⁻¹).
We are investigating a Butterworth filter with a squared amplitude response given by
|H(jω)|² = 1 / (1 + (ω/ωc)⁶)   ⟹   H(s) · H(−s) = 1 / (1 + (−s²/ωc²)³)
a) Determine the poles of H(s) and write H(s).
b) We would like to have a 3-dB edge frequency at Ωc = π/4. Use the bilinear transform to modify the selected filter response to become a digital filter with the same principle characteristic.
c) Although we used the same characteristic, which properties or parameters are strictly preserved by the bilinear transform, which ones are not?
d) Now, we think of using the samples of the impulse response of an analog filter to define the digital filter. This is called "Impulse Invariance." Why would one possibly have issues with aliasing in this case? What is the basic underlying theorem? (1 word!)

Solution
a) Poles: −s²/ωc² = (−1)^{1/3} = e^{j(2k+1)π/3}   ⟹   sk = ωc e^{j[π/2 + (2k+1)π/6]}, k = 0, 1, 2
H(s) = 1 / ∏_{k=0}^{2} (s − sk)
b) ωc = (2/T) · tan(Ωc/2) = (2/T) · tan(π/8) = 0.8284/T
c) H(z) = 1 / ∏_{k=0}^{2} ((2/T) · (1 − z⁻¹)/(1 + z⁻¹) − sk)
d) Sampling

6.) Some mixed DSP questions

a) Under which condition does a DFT represent the z-transform correctly and what are the values of z that are computed by the DFT?
b) Write the DFT as values of a polynomial.
c) How does the Horner scheme for computing the value of a polynomial work? Is it an FFT algorithm?
d) Order the different FIR and IIR filter types in terms of complexity.
e) Which filter types could lead to limit cycles?
f) When multiplying two fixed-point two's complement numbers of 16 bit length, how long do you expect the result to be?

Solution
a) Time-limited function; z = e^{j 2πk/N}, k = 0, ..., N − 1
b) Fk = f(x)|_{x = e^{−j 2πk/N}}
c) E.g., f(x) = a0 + (a1 + (a2 + a3 x)x)x
d) FIR, Bessel, Butterworth, Tchebyshev, Cauer
e) recursive
f) 31 or 32 to stick to powers of 2

7.) Single-carrier modulation

a) To simplify the average power calculation, for bigger quadratic signal sets a continuous approximation is often used. Hence, you assume a uniform distribution over a square of size 2(√M − 1)a × 2(√M − 1)a, i.e., (−√M + 1)a ≤ x, y ≤ (√M − 1)a. Determine the average power integrating over that square. What is the difference to the exact formula P_{M-QAM} = a² (2/3)(M − 1)?
b) We now assume we could place the modulation points equally distributed inside a circle. The area shall be the same as under a). Compute the average power again. Which one is lower? This you might already be able to tell without any computation.
c) Show that applying the ℜe in √2 ℜe{Σ_n (an + jbn) g(t − nT) e^{jωc t}} leads to two conjugate symmetric bands.

Solution
a) First assume a 1 × 1 square:
P□ = ∫_{−1/2}^{+1/2} ∫_{−1/2}^{+1/2} (x² + y²) dy dx = ∫_{−1/2}^{+1/2} [x²y + y³/3]_{y=−1/2}^{+1/2} dx = ∫_{−1/2}^{+1/2} (x² + (2/3)(1/2)³) dx = [x³/3 + (2/3)(1/2)³ x]_{x=−1/2}^{+1/2} = (4/3)(1/2)³ = 1/6
Since the length of the square is not 1, but 2(√M − 1)a, the power has to be multiplied with 4(√M − 1)²a², leading to
P□^(M) = (1/6) · 4(√M − 1)²a² = (2/3)(√M − 1)²a²
b) Assume the area to be one first, i.e., πR² = 1 ⟹ R = 1/√π
P◦ = ∫_0^R r² · 2πr dr = 2π [r⁴/4]_0^R = (π/2) R⁴ = 1/(2π) < P□
You could have told this already without computation, since the power is a quadratic function and hence the corners of the square have a strong influence.
Just as under a), we compute
P◦^(M) = (1/(2π)) · 4(√M − 1)²a² = (2/π)(√M − 1)²a² < P□^(M)
c) √2 ℜe{Σ_n (an + jbn) g(t − nT) e^{jωc t}} = (1/√2) Σ_n g(t − nT)(an + jbn) e^{+jωc t} + (1/√2) Σ_n g(t − nT)(an − jbn) e^{−jωc t}
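A small numerical comparison related to a) of the single-carrier problem (our own test values for M, with a = 1), contrasting the exact M-QAM power with the continuous square approximation derived above:

import numpy as np

a = 1.0
for M in [4, 16, 64, 256, 1024]:
    m = int(np.sqrt(M))
    levels = a * (2 * np.arange(m) - (m - 1))           # -(m-1)a, ..., +(m-1)a
    I, Q = np.meshgrid(levels, levels)
    exact = np.mean(I**2 + Q**2)                        # equals a^2 * (2/3) * (M - 1)
    approx = 2.0 / 3.0 * (np.sqrt(M) - 1)**2 * a**2     # uniform square of side 2(sqrt(M)-1)a
    print(M, exact, a**2 * 2 / 3 * (M - 1), round(approx, 2))
    # the approximation is lower, and the relative difference shrinks for large M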
5.) Some mixed questions
a) The Levinson-Durbin algorithm has a direct relation to one of the filter structures - which one? (1)
b) For moving from a low-pass to a high-pass filter, what kind of operation is required for the time domain
samples of the impulse response and which operation for the feed-forward and feedback coefficients?
(2)
c) Describe the relation for coordinates (Cartesian or amplitude/phase) in z and Laplace domain using the standard mapping z = e^{sT}. What is special about the mapping to Laplace when poles and zeros are given in z domain? (2)
d) What are advantages and disadvantages of Tomlinson-Harashima precoding? (3 aspects in total)
(3)
e) When will a circuit (analog or digital) oscillate? What can you say about zeros or poles? Tell in
Laplace or z domains. (2)
f) What problem is solved by the overlap-save or overlap-add methods? (1)
g) Do quadrature mirror filters require a strict low-pass / high-pass separation with or without an overlap,
i.e., do they require or not require to fulfill the sampling theorem strictly when reducing the sampling
rate after filtering? (1)
h) The convolution theorem of the DFT is linked to matrix diagonalization with eigenvalue decomposition. What are the eigenvalues, what are the eigenvectors? (2)
i) What is the further function of a whitening filter apart from noise whitening? Why would one whiten
the noise? (2)
j) You have a Fourier transform pair f(t) ◦––• F(jω). f(t) is time limited to the interval (0, T). Now, we like to sample in Fourier domain to model a Fourier series which has discrete components in frequency domain. What should be the sampling frequency interval ∆f to avoid any artefacts? What artefacts are meant? (2)
3.) Transversal filter model, DFT

Assume we have the possibility to measure a channel, e.g., a cable, with a vector network analyzer using 801 samples. We intend to use an IFFT to determine the samples of an impulse response. You are interested in a spectrum up to 10 MHz. Since the characteristic is known to not stop at that frequency, but even still rise there, maybe even have a HP characteristic, you decide for some oversampling to then drag the transfer function to zero outside of the 10 MHz range. A typical choice is 4 times oversampling relative to the sampling rate given by the sampling theorem.
a) What is the (oversampled) sampling rate fs?
b) The network analyzer could not measure at DC, but you know that the transfer function there is zero, anyhow. You will be adding this to your measurement vector, making it 802 values. You like to measure up to fs/4, i.e., your last sample should be located at fs/4. You like to use a window that reduces the last sample to zero by a cosine roll-off. This window should cover the range of the last 100 samples. Write this cosine roll-off function dependent on the sample index n.
c) You now like to apply the IFFT to determine the time-domain counterpart. How long would you choose the vector in DFT domain and how will you fill it with the samples of the previous question?
d) After applying the IFFT, is your result real or complex?
e) Describe how you would extract the impulse response and how you could reduce it in length.
f) In a direct form 2 structure, how do you now choose the ai and bi coefficients?

Solution:
a) 80 MHz (1)
b) (1/2) (1 + cos((n − 701) π/100)) (2)
c) h ◦––• [0, H, 0, ..., 0, mirror(H*)], N = 2¹³ = 8192 (2)
d) Real (1)
e) Start from the amplitude maximum and search for the beginning and end in both directions in a cyclic manner, e.g., checking that the amplitudes stay below a certain amplitude. Then drag the ends to zero with, e.g., a cosine roll-off. (2)
f) ai = 0, bi = hi (1)

4.) Fourier series/transform, Wavelet

We remember the Haar wavelet to be defined as
Ψ(x) = 1 for 0 ≤ x < 1/2,   −1 for 1/2 ≤ x < 1,   0 otherwise
and
Ψjk = 2^{j/2} Ψ(2^j x − k).
We may define x = t/T.
a) What would you think is the property of the base functions that is the same as for the Fourier series/Fourier transform base functions? Formulate that property with the Haar wavelets in there to see that it holds.
b) What would you see as the differences between the Haar wavelet and the base functions in the Fourier transform?
c) Assume you have a constant voltage v0. What will be the result from a Fourier series/transform and a Wavelet transform?
d) What signal property would be highlighted by a Wavelet transform?

Solution:
a) Orthogonality, ∫_{−∞}^{+∞} Ψjk(x) Ψj'k'(x) dx = 0 (2)
b) Localization together with scaling (1)
c) In a Fourier series, the constant will be represented by c0 = a0/2 = v0, in a Fourier transform by v0 · 2πδ(ω); with the Haar Wavelet, it will become zero (assumption: no DC component). (2)
d) Jumps, i.e., discontinuities or steep transitions (1)
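A numerical spot check of a) of the wavelet problem for index pairs with (j, k) ≠ (j', k') (the integration grid and the tested pairs are our own choice):

import numpy as np

def haar(x, j, k):
    # psi_{jk}(x) = 2^{j/2} * psi(2^j x - k), with psi = +1 on [0, 1/2), -1 on [1/2, 1)
    u = 2.0**j * x - k
    return 2.0**(j / 2) * (((0 <= u) & (u < 0.5)) * 1.0 - ((0.5 <= u) & (u < 1.0)) * 1.0)

x = np.linspace(0, 1, 200001)             # fine grid on [0, 1] for a crude integral
dx = x[1] - x[0]
pairs = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((0, 0), (2, 3)), ((1, 1), (1, 1))]
for (j1, k1), (j2, k2) in pairs:
    val = np.sum(haar(x, j1, k1) * haar(x, j2, k2)) * dx
    print((j1, k1), (j2, k2), round(val, 3))   # ~0 for different indices, ~1 for equal ones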
5.) Bilinear transform and filter types

a) Explain the direct mapping between z and sT regarding the frequency response. What would happen to zeros and poles? Place two poles and one zero for a low-pass characteristic. Show both in Laplace and z domain.
b) In contrast to this, what is the special characteristic of a bilinear transform? What is the drawback regarding the positions of poles and zeros? Explain what properties of the shape of a given transfer characteristic can be preserved.
c) Assume a Bessel filter, which is close to linear phase in the passband, with the following transfer function in Laplace domain:
H(s) = 1 / (1 + s + (1/3) s²).
Write this in Fourier domain and determine the 3 dB edge frequency (relative to the normalized transfer function at DC). (Here, we ignore issues around units.)
d) Assume that in our digital filter, we like to have edge frequencies at Ω = ±π/4. Rephrase the Bessel transfer function such that it has the edge frequency at the "corresponding" frequency, i.e., modify the expression making use of the previously computed edge frequency under b) and the frequency transform equations of the bilinear transform.
e) Now formulate the corresponding z-domain transfer function.

Solution:
Reminder: In our formulation, with ω being the physical frequency and Ω being the normalized one in the time-discrete system, we have the following relations of the bilinear transform:
Ω = 2 tan⁻¹(ωT/2),   ω = (2/T) tan(Ω/2)
tan(π/8) = 0.4142
a) z = e^{sT}
[Figure: mapping of the strip −π < ωT ≤ π of the s-plane onto the z-plane; not reproduced] (1+2=3)
b) The edge frequency is preserved, otherwise the frequency axis is warped; zero and pole positions are not preserved, but the principal ripple behavior is kept. (2)
c) H(s) = 1/(1 + s + (1/3)s²)
|H(jω0)|² = |1/(1 + jω0 − (1/3)ω0²)|² = |3/(3 + j3ω0 − ω0²)|² = 9/((3 − ω0²)² + 9ω0²) = 1/2 (1)

6.) Multicarrier modulation

a) Some systems, e.g., VDSL, use so-called pulse shaping and windowing, where the first is at the transmitter, the latter at the receiver. This means a longer cyclic extension to allow for a smoother transition between symbols, typically in a cosine roll-off / roll-on fashion. What would be the effect, if you are thinking of the spectrum around a certain carrier, comparing a rectangular window function for pulse shaping to a smoother cosine-shaped one?
Likewise, you apply windowing in that cosine shape also when you are receiving a symbol, i.e., you use a cosine-shaped beginning and end. What is the advantage here when you have a narrowband (think of sinusoidal) disturber (e.g., an amateur radio transmitter)?
b) Think of using single-carrier modulation instead, but still a cyclic prefix to make the linear channel convolution appear cyclic. How would you place the blocks IFFT, FFT, CP, CP elimination, and ZF frequency-domain equalization?
c) What is the voltage distribution in time domain as the consequence of having random signals in DFT domain? Distinguish the real time-domain function (Discrete MultiTone) and the complex one (OFDM). For the latter, of course, you will obtain a 2-dimensional distribution, since you have a complex baseband signal. What kind of distributions do you expect for both cases?
d) Now think of a one-dimensional antipodal baseband signaling just using ±1. What is the amplitude distribution? How will it look like after having some filter function with, e.g., more than 10 coefficients?

Solution:
a) Pulse shaping reduces disturbance into protected intervals due to faster decay of spectral sidelobes. Windowing reduces interference from around the narrowband disturber frequency. (2)
b) SC transmitter −→ CP −→ Channel −→ CP elimination −→ FFT −→ ZF FEQ −→ IFFT (2)
c) 1D Gaussian for DMT, 2D Gaussian with Rayleigh amplitude distribution for OFDM. (2)
d) Two peaks at ±1, after the filter more Gaussian. (2)

7.) Diverse questions

a) Let F(z) = z/((z − a)(z − b)). Use partial fraction expansion and determine the time-domain counterpart. We assume causality.
b) What is the drawback of Tomlinson-Harashima precoding when considering the receiver?
c) What is the special application for unilateral (one-sided) Laplace or z transforms?
d) Why do you expect to have conjugate pairs for poles and/or zeros of a transfer function?
e) Consider a linear phase transversal filter. Why does a segment of a cascade structure combine 4 zeros? Which ones are those?
f) How can you consider poles and zeros outside a unit circle in z-domain as stable and minimum phase?
g) What is the change required in the impulse response or the filter coefficients to transform a filter from LP to HP or vice versa?
h) Which structure, DFE or noise-canceling, provides more flexibility?
i) Why can you reduce the number of samples in chrominance compared to luminance in video coding without visible quality deficiencies?
j) In a quadrature mirror filter for a LP/HP separation, will you accept some aliasing? If yes, will this lead to disturbances that cannot be eliminated again?
k) When you put the zone plate with its rings and increasing spatial frequency to the outside in front of a camera, you see multiple rings in vertical direction. Why? When would you also see multiple rings in horizontal direction?
l) What was the change in the signal to obtain a Discrete Cosine Transform?
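No solution for 7 a) is printed in this copy; as a hedged sketch (with our own test values a = 0.8, b = 0.3), the usual partial-fraction route F(z)/z = 1/((z − a)(z − b)) = [1/(a − b)]·[1/(z − a) − 1/(z − b)] yields the causal sequence f(n) = (aⁿ − bⁿ)/(a − b), which the following check compares against the impulse response of the corresponding difference equation:

import numpy as np

a, b = 0.8, 0.3                                     # arbitrary test values
N = 30
n = np.arange(N)
f_closed = (a**n - b**n) / (a - b)                  # closed form from the partial fractions

# impulse response of F(z) = z / ((z-a)(z-b)) = z^{-1} / ((1 - a z^{-1})(1 - b z^{-1})),
# i.e., f(n) = (a+b) f(n-1) - a b f(n-2) + delta(n-1)
f_rec = np.zeros(N)
for k in range(N):
    f_rec[k] = (a + b) * (f_rec[k-1] if k >= 1 else 0) \
               - a * b * (f_rec[k-2] if k >= 2 else 0) \
               + (1.0 if k == 1 else 0.0)
print(np.allclose(f_closed, f_rec))                 # True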
3.) Sampling-rate conversion / filter banks

a) Assume you have a frequency band being used given by 3ω0 < ω < 5ω0. You like to suppress the carrier frequency at 4ω0 using a HP filter. To this end, you will be able to process the signal by modulation and/or demodulation using e^{±jnω0 t} and choosing a sampling rate and/or doing sampling-rate conversion(s). Show a corresponding block diagram.
b) In a polyphase filter bank, one will also do similar operations, but using a prototype low-pass filter (not a highpass). How will the modulation and/or demodulation e^{±jnω0 t} be realized and where would you find those operations?
c) Does a quadrature mirror filter strictly obey the sampling theorem? If yes, due to filter slopes, will spectral components between low-pass and high-pass band be lost? If no, would aliasing destroy spectral components?

Solution
a) −→ · e^{−j4ω0 t} −→ (sub)sampling at 2ω0 −→ HP −→ upsampling (zero padding & LP filter) to reach 10ω0 −→ · e^{+j4ω0 t} −→
b) Modulation and demodulation are realized by a DFT (FFT), located after the filters in the analysis filter bank and before the filters in the synthesis filter bank.
c) It does not obey the sampling theorem, but the slopes fulfill a certain symmetry that finally avoids aliasing effects and also losses.
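A small numerical illustration of c) for the simplest (Haar) quadrature mirror filter pair, with our own test signal:

import numpy as np

# Two-channel Haar QMF check: the low-pass / high-pass split is far from brickwall
# (a strict reading of the sampling theorem is violated before downsampling by 2),
# yet the QMF symmetry makes the aliasing of the two branches cancel exactly.
x = np.random.randn(64)
# analysis (Haar filters reduce to 2-point butterflies), then downsampling by 2
v0 = (x[0::2] + x[1::2]) / np.sqrt(2)      # low-pass branch
v1 = (x[0::2] - x[1::2]) / np.sqrt(2)      # high-pass branch
# synthesis: upsample and recombine
xr = np.empty_like(x)
xr[0::2] = (v0 + v1) / np.sqrt(2)
xr[1::2] = (v0 - v1) / np.sqrt(2)
print(np.allclose(xr, x))                  # True: no loss, aliasing cancelled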
1.) FIR/IIR filter

We do a measurement of some baseband channel using a vector network analyzer. That analyzer can only do 2001 measurement points and the lowest possible measurement frequency is 30 kHz, and we liked to measure up to 100 MHz. The zero frequency we cannot measure. For simplicity, we might set the value there to zero, assuming we had an inductive coupling with a Balun (transformer), which suppresses DC, anyhow.
Up to 100 MHz, we like to keep the frequency response unaltered, but we are allowed to measure, e.g., up to 110 MHz.
a) What is the frequency spacing of your measurements?
b) Now, we think of 4-times oversampling relative to the user spectrum width of 100 MHz (one-sided). What will then be the sampling frequency?
c) Design a window with a raised cosine roll-off, where the roll-off is placed between 100 and 110 MHz. Write this window dependent on f. Reformulate this in a discrete form given the 2001 frequency points.
d) Now, for determining the corresponding impulse response, we like to apply an IFFT (Cooley-Tukey type). Determine the content of the vector of length N before applying the IFFT. This means, choose a suitable N first and then describe the contents of the vector.
e) After the IFFT, how do you extract the impulse response? (two words!)
f) The impulse response is too long. How would you shorten it? (one word!)
g) Imagine now that you can approximate the frequency response by an IIR filter. The transfer function is then F(z) = B(z)/A(z). You would like to shorten the impulse response and thereby make it finite in length such that it is given by the degree of B(z) + 1. Which FIR filter could you apply to shorten the impulse response as desired?
(This is a possibility for time-domain equalization for multicarrier modulation, however, not a well-performing one.)

7.) Single-carrier / multicarrier modulation

There is a proof that under ideal assumptions the performance of single-carrier and multicarrier modulation is the same. From information theory, one knows that one should do water filling to adjust the power on individual channels (power loading) and also, one should use bit-loading to adjust the signal alphabet size according to the individual channel capacities in multicarrier. The underlying assumption is the independence of the channels in multicarrier transmission.
a) How is the independence of channels realized in multicarrier modulation?
b) How is ISI / ICI (Inter-Symbol / Inter-Carrier Interference) avoided in baseband, single-carrier, and multicarrier modulation? Are there relations?
c) Can you imagine what could be seen as a kind of counterpart for power-loading in baseband or single-carrier modulation to be able to adjust to the channel at the transmitter side?
d) At the beginning of the multi-carrier era, the main argument was that the equalization should be much easier in multicarrier transmission, assuming that the cyclic prefix is long enough. What is the order of the complexity of a linear equalizer in baseband or single-carrier transmission, what is it for equalization in multicarrier transmission? Think about how the equalizer would work and where it is placed.

Solution
a) By preprocessing with F^H = F^{−1}, an IDFT (IFFT), and postprocessing with F, together with knowing that the channel Toeplitz matrix can be written as F^H T F. All together, this would lead to
y = F (F^H T F) F^H,
where the bracketed term D = F^H T F is a diagonal matrix.
b) ISI in baseband and single-carrier modulation is avoided by fulfilling the Nyquist criterion, leading to a raised cosine roll-off. For a roll-off of zero, one would obtain a sinc function as impulse response, with zeros at the neighboring time instants, i.e., no ISI. For multicarrier modulation with the rectangular window in time domain, the spectra on every carrier show a sinc shape, as well, with zeros at the neighboring spectral lines, i.e., no ICI.
c) Counterpart of power loading: precoding
d) Order of a linear equalizer: L², for L coefficients. Order of the FEQ in multicarrier modulation: N, but additionally N log N from the FFT.

8.) FFT

FFT algorithms are based on a "decimation" in time or frequency. What property is required as far as the length N is concerned, i.e., are there lengths which definitely do not allow for such an FFT algorithm? What is the requirement for N for the Cooley-Tukey algorithm? How many stages would you expect for a length of 243 (not using Cooley-Tukey!)? What is the decimation used in this case?

Solution
Prime lengths do not allow for an FFT.
For Cooley-Tukey: N = 2^n.
243 = 3^5, i.e., 5 stages (decimation by a factor of 3).
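As an illustration of the five radix-3 stages for N = 243, here is a minimal recursive decimation-in-time sketch (our own code; note that NumPy's fft uses the e^{−j2πik/N} sign convention, opposite to the DFT definition printed earlier):

import numpy as np

def fft_radix3(x):
    # Recursive radix-3 decimation-in-time FFT; the length must be a power of 3.
    N = len(x)
    if N == 1:
        return x.astype(complex)
    X0 = fft_radix3(x[0::3])
    X1 = fft_radix3(x[1::3])
    X2 = fft_radix3(x[2::3])
    k = np.arange(N)
    W = np.exp(-2j * np.pi * k / N)        # twiddle factors of this stage
    return np.tile(X0, 3) + W * np.tile(X1, 3) + W**2 * np.tile(X2, 3)

x = np.random.randn(243)
print(np.allclose(fft_radix3(x), np.fft.fft(x)))   # True; log3(243) = 5 stages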
8.) Some mixed comms' questions

a) Describe the main difference of the MMSE equalizer setting and the LMS counterpart. A similar pair of solutions was also introduced for ZF, but using the same name for both.
b) Tomlinson-Harashima precoding moved the feedback filter to the transmitter. What is the advantage for using error correcting codes? Describe the signal set that the receiver would relate to for detection or decoding.
c) Assume an oversampled realization of a Decision Feedback Equalizer. Which filters (FF or FB) run at an oversampled clock? If an adaptation algorithm like LMS is used, at what rate has it to be activated? What components of a non-oversampled structure will now be realized by the FF filter?
d) Inter-symbol interference (ISI) in baseband or single-carrier transmission has a counterpart in multicarrier transmission, i.e., also the Nyquist criterion has a counterpart. Explain shortly.
e) What is the function of the cyclic prefix in multicarrier modulation?

Solution
a) MMSE: equalizer setting dependent on the known channel transfer function and noise PSD; LMS: iterative adaptive counterpart without previous knowledge about channel and noise. Likewise, there are ZF options.
b) No immediate feedback of decided symbols is required, hence an arbitrary decoding delay is possible.
c) FF at the oversampled clock, FB at the symbol rate, adaptation at the symbol rate; the oversampled FF realizes the matched and whitening filters.
d) The multicarrier counterpart is ICI with a dual Nyquist condition, determining the pulse shape in time domain, e.g., a rectangular shape leads to sinc functions with zeros at neighboring carriers.
e) The CP makes the channel appear cyclic.
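A numerical illustration of e) (our own parameter choices): with a cyclic prefix at least as long as the channel memory, the dispersive channel reduces to one complex factor per carrier, which a ZF division then undoes:

import numpy as np

rng = np.random.default_rng(0)
N, Lcp = 64, 8
h = rng.normal(size=5)                              # channel shorter than the CP
X = rng.normal(size=N) + 1j * rng.normal(size=N)    # data in DFT domain
x = np.fft.ifft(X)                                  # time-domain multicarrier symbol
tx = np.concatenate([x[-Lcp:], x])                  # prepend the cyclic prefix
rx = np.convolve(tx, h)                             # linear (dispersive) channel
y = rx[Lcp:Lcp + N]                                 # drop the CP, keep one block
Y = np.fft.fft(y)
H = np.fft.fft(h, N)
print(np.allclose(Y, H * X))                        # True: channel = per-carrier multiplication
print(np.allclose(Y / H, X))                        # ZF equalization recovers the data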
Midterm DSP
April 5, 2017

3.) Sampling
Let us assume the following Fourier spectrum:
[Figure not reproduced]

d.) Modify H(z) such that you can easily plot the corresponding shift register circuit. Draw this circuit!

Midterm Exam DSP
08.04.2010
Prof. Dr.-Ing. W. Henkel

1.) DFT, FFT
In here, we ignore any factor of 1/N or 1/√N.
The length-2 DFT can be described as a matrix with very trivial entries. Determine this matrix.
Write down the DFT matrix for a DFT of length 4. Apply permutations such that you realize the structure of a DFT of length 2 in that matrix. Now describe the length-4 DFT in terms of the length-2 ones and outline what operations become visible in this matrix extension.
How does the length-2 DFT fit into the elementary operations that you have observed?

Solution
Length-2 DFT matrix:
W2 = [1 1; 1 −1]
Length-4 DFT/IDFT matrix:
[1 1 1 1; 1 e^{jπ/2} e^{jπ} e^{j3π/2}; 1 e^{jπ} e^{j2π} e^{j3π}; 1 e^{j3π/2} e^{j3π} e^{j9π/2}] = [1 1 1 1; 1 w¹ w² w³; 1 w² w⁴ w⁶; 1 w³ w⁶ w⁹]   with w = e^{jπ/2}
Permuting the columns:
[1 1 1 1; 1 w¹ w² w³; 1 w² w⁴ w⁶; 1 w³ w⁶ w⁹] · [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1] = [1 1 1 1; 1 w² w¹ w³; 1 w⁴ w² w⁶; 1 w⁶ w³ w⁹]
= [1 1 1 1; 1 −1 j −j; 1 1 −1 −1; 1 −1 −j j] = [I2 I2; I2 −I2] · [W2 0; 0 D2 W2]   with the twiddle factors D2 = diag(1, j).
W2 is the length-2 DFT and [I2 I2; I2 −I2] also has the same structure, only involving other components. We observe the alternation of length-2 DFT-like operations and rotations (twiddling factors).

2.) Laplace transform
[Circuit figure with voltage source u0, switch S, resistors R0, R1, RL, diode D, and inductance L; not reproduced]
The shown network is fed from a constant voltage source. For t < 0, no current is flowing through the coil with the inductance L and the wire impedance RL. At the time t = 0, the switch S is closed and reopened at t = t1 > 0. We idealize the resistance of the diode in forward direction to be zero and in the opposite direction to be infinity. Compute the current through the inductor for 0 ≤ t ≤ t1 and t1 ≤ t < ∞. Use the one-sided (unilateral) Laplace transform.
You may require the following relation:
(e^{−γt} − e^{−αt})/(α − γ) ◦––• 1/((s + α)(s + γ))

Solution
IL(s) = u0/(R0 s) · (1/(RL + sL)) / (1/R0 + 1/R1 + 1/(RL + sL))
= u0/(R0 s) · R0 R1 / ((R0 + R1)(RL + sL) + R0 R1)
= (u0 R1/s) · 1 / (R0 RL + R1 RL + R0 R1 + s(R0 + R1)L)
= u0 R1/((R0 + R1) L s) · 1 / (s + (R0 RL + R1 RL + R0 R1)/((R0 + R1)L))
iL(t) = u0 R1/(R0 RL + R1 RL + R0 R1) · (1 − e^{−αt})   for 0 ≤ t ≤ t1,   with α = (R0 RL + R1 RL + R0 R1)/((R0 + R1)L)
[Laplace-domain equivalent circuit for t ≥ t1 with RL, sL, and the initial-condition source iL(t1)/s · e^{−st1}; not reproduced]
IL(s) = sL/(RL + sL) · iL(t1)/s · e^{−st1} = 1/(s + RL/L) · iL(t1) · e^{−st1}
iL(t) = iL(t1) e^{−(t−t1) RL/L}   for t ≥ t1

3.) Z- and Laplace transforms
We consider a digital filter of the following form:
[Figure: recursive filter with input x(n), forward coefficient b0, feedback coefficients −a1, −a2, and output y(n); not reproduced]
We consider the cases where a1 = 1/2, a2 = 0 and a1 = 0, a2 = 1/2.
a.) Determine the poles of the transfer function in z for both cases and draw them into a diagram.
b.) Try to sketch the absolute value of the transfer function over Ω = ωT.
c.) Can you explain the difference in the results from a. and b.? Think of one special filter type that we discussed.
d.) What would be the corresponding pole locations in the Laplace plane?
e.) If you would move the forward coefficient b0 to the other positions of the shift ...

Solution
a.) 1st case:
Y(z) = b0 X(z) − (1/2) Y(z) z⁻¹
Y(z)/X(z) = b0 / (1 + (1/2) z⁻¹) = b0 z/(z + 1/2),   pole at z∞ = −1/2
2nd case:
Y(z)/X(z) = b0 / (1 + (1/2) z⁻²) = b0 z²/(z² + 1/2),   poles at z∞ = ±√(−1/2) = ±j/√2
b.) [Plot of |Y(jω)/X(jω)| over ωT; not reproduced]
c.) Comb filter by doubling the delay
d.) e^{(σ+jω)T} = −1/2 ⟹ ωT = ±π and σT = ln(1/2)
e^{(σ+jω)T} = ±j/√2 ⟹ ωT = ±π/2 and σT = ln(1/√2)
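A quick numerical cross-check of a) to c) (our own code, with b0 = 1 assumed):

import numpy as np

Omega = np.linspace(-np.pi, np.pi, 1001)
z = np.exp(1j * Omega)
H1 = 1 / (1 + 0.5 / z)          # a1 = 1/2, a2 = 0: single real pole at z = -1/2
H2 = 1 / (1 + 0.5 / z**2)       # a1 = 0, a2 = 1/2: poles at z = +-j/sqrt(2)
# |H1| has one dip/peak pattern over [-pi, pi); |H2| repeats it twice (comb-like),
# since doubling the delay compresses the frequency response by a factor of two.
print(np.round(np.abs(H1[[0, 500, 1000]]), 3))           # values at Omega = -pi, 0, pi
print(np.round(np.abs(H2[[0, 250, 500, 750, 1000]]), 3)) # values at -pi, -pi/2, 0, pi/2, pi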
Solution to 4.)
A and B should be conjugates to obtain a real sequence!
Y(z)/z = A / (z − (1/4)(1 + √7 j)) + B / (z − (1/4)(1 − √7 j))
Y(z) = A · z/(z − (1/4)(1 + √7 j)) + B · z/(z − (1/4)(1 − √7 j))
= A · 1/(1 − (1/4)(1 + √7 j) z⁻¹) + B · 1/(1 − (1/4)(1 − √7 j) z⁻¹)
y(n) = A · ((1/4)(1 + √7 j))ⁿ + B · ((1/4)(1 − √7 j))ⁿ
= (−1/4 + j/(4√7)) · ((1/4)(1 + √7 j))ⁿ + (−1/4 − j/(4√7)) · ((1/4)(1 − √7 j))ⁿ,   n = 0, 1, 2, ...

5.) Communications
Draw the structure of a baseband communication system and describe the function of the main components.

Solution
[Block diagram of a baseband transmission system, including the noise source (e.g., AWGN); not reproduced]

6.) Communications
Assume a channel to be ideal, i.e., the transfer function is assumed to be one. Furthermore, let the noise power spectral density be constant (white noise). Let a transmit filter be chosen as the square root of the raised cosine function. How would the receive filter be chosen to optimize performance? Which two criteria are automatically fulfilled by this choice?

Solution
The receive filter should be a square-root raised-cosine filter, as well. This ensures:
• Both transmit and receive filters together multiply to a raised-cosine filter fulfilling the Nyquist criterion.
• Since the impulse response of the RC filter is even, mirroring does not change the impulse response, i.e., this choice of the receive filter is also the matched filter.
With these properties, no further processing would be needed. The SNR at the sampling instant is maximized and there is no inter-symbol interference.

7.) Communications / fast convolution
In multicarrier modulation a trick is applied to ensure that the channel effect will only be a multiplication with a frequency-dependent factor in DFT domain. Describe the trick. How would you perform the equalization in DFT domain, i.e., which approach would you follow (ZF or MMSE); why? If you would instead have a baseband or single-carrier transmission, which solution would be preferred? Explain your choices.
In fast convolution, related tricks had been introduced. What are the essential methods there and how do they work?

Solution
Cyclic prefix; ZF, i.e., dividing through the multiplicative complex factor representing the channel transfer function, practically multiplying with the inverse of the channel transfer function.
In baseband or single-carrier transmission, the choice would be MMSE, since there the information is in time domain. Zero forcing would there lead to noise enhancement. If the information is in frequency (DFT) domain, the ZF factors will influence signal and noise in the same way, not changing the SNR at all.
Overlap-Add and Overlap-Save: increasing the length by an overlap of the length of the impulse response minus one, duplicating overlap blocks or zeroing ends.
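A minimal overlap-save sketch (our own parameter choices) illustrating the block-wise fast convolution just described:

import numpy as np

rng = np.random.default_rng(1)
h = rng.normal(size=16)                     # impulse response, length M
x = rng.normal(size=1000)
M, N = len(h), 128                          # FFT length N, valid samples per block L = N - M + 1
L = N - M + 1
H = np.fft.fft(h, N)
xp = np.concatenate([np.zeros(M - 1), x, np.zeros(L)])   # prepend M-1 zeros (the overlap)
y = []
for start in range(0, len(x), L):
    block = xp[start:start + N]
    if len(block) < N:
        block = np.concatenate([block, np.zeros(N - len(block))])
    yb = np.fft.ifft(np.fft.fft(block) * H).real
    y.append(yb[M - 1:])                    # discard the first M-1 (cyclically aliased) samples
y = np.concatenate(y)[:len(x) + M - 1]
print(np.allclose(y, np.convolve(x, h)))    # True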
Table of Laplace and Z-transforms

No. | X(s) | x(t) | x(kT) or x(k) | X(z)
1.  | – | – | Kronecker delta δ0(k): 1 for k = 0, 0 for k ≠ 0 | 1
2.  | – | – | δ0(n − k): 1 for n = k, 0 for n ≠ k | z^{−k}
3.  | 1/s | 1(t) | 1(k) | 1/(1 − z⁻¹)
4.  | 1/(s + a) | e^{−at} | e^{−akT} | 1/(1 − e^{−aT} z⁻¹)
5.  | 1/s² | t | kT | T z⁻¹/(1 − z⁻¹)²
6.  | 2/s³ | t² | (kT)² | T² z⁻¹ (1 + z⁻¹)/(1 − z⁻¹)³
7.  | 6/s⁴ | t³ | (kT)³ | T³ z⁻¹ (1 + 4z⁻¹ + z⁻²)/(1 − z⁻¹)⁴
8.  | a/(s(s + a)) | 1 − e^{−at} | 1 − e^{−akT} | (1 − e^{−aT}) z⁻¹ / ((1 − z⁻¹)(1 − e^{−aT} z⁻¹))
9.  | (b − a)/((s + a)(s + b)) | e^{−at} − e^{−bt} | e^{−akT} − e^{−bkT} | (e^{−aT} − e^{−bT}) z⁻¹ / ((1 − e^{−aT} z⁻¹)(1 − e^{−bT} z⁻¹))
10. | 1/(s + a)² | t e^{−at} | kT e^{−akT} | T e^{−aT} z⁻¹ / (1 − e^{−aT} z⁻¹)²
11. | s/(s + a)² | (1 − at) e^{−at} | (1 − akT) e^{−akT} | (1 − (1 + aT) e^{−aT} z⁻¹) / (1 − e^{−aT} z⁻¹)²
12. | 2/(s + a)³ | t² e^{−at} | (kT)² e^{−akT} | T² e^{−aT} (1 + e^{−aT} z⁻¹) z⁻¹ / (1 − e^{−aT} z⁻¹)³
13. | a²/(s²(s + a)) | at − 1 + e^{−at} | akT − 1 + e^{−akT} | ((aT − 1 + e^{−aT}) z⁻¹ + (1 − e^{−aT} − aT e^{−aT}) z⁻²) / ((1 − z⁻¹)²(1 − e^{−aT} z⁻¹))
14. | ω/(s² + ω²) | sin ωt | sin ωkT | z⁻¹ sin ωT / (1 − 2z⁻¹ cos ωT + z⁻²)
15. | s/(s² + ω²) | cos ωt | cos ωkT | (1 − z⁻¹ cos ωT) / (1 − 2z⁻¹ cos ωT + z⁻²)
16. | ω/((s + a)² + ω²) | e^{−at} sin ωt | e^{−akT} sin ωkT | e^{−aT} z⁻¹ sin ωT / (1 − 2e^{−aT} z⁻¹ cos ωT + e^{−2aT} z⁻²)
17. | (s + a)/((s + a)² + ω²) | e^{−at} cos ωt | e^{−akT} cos ωkT | (1 − e^{−aT} z⁻¹ cos ωT) / (1 − 2e^{−aT} z⁻¹ cos ωT + e^{−2aT} z⁻²)
18. | – | – | a^k | 1/(1 − az⁻¹)
19. | – | – | a^{k−1}, k = 1, 2, 3, ... | z⁻¹/(1 − az⁻¹)
20. | – | – | k a^{k−1} | z⁻¹/(1 − az⁻¹)²
21. | – | – | k² a^{k−1} | z⁻¹ (1 + az⁻¹)/(1 − az⁻¹)³
22. | – | – | k³ a^{k−1} | z⁻¹ (1 + 4az⁻¹ + a²z⁻²)/(1 − az⁻¹)⁴
23. | – | – | k⁴ a^{k−1} | z⁻¹ (1 + 11az⁻¹ + 11a²z⁻² + a³z⁻³)/(1 − az⁻¹)⁵
24. | – | – | a^k cos kπ | 1/(1 + az⁻¹)

Z{x(k)} = X(z) = Σ_{k=0}^{∞} x(k) z^{−k}
Z-transform properties:
1.  | a x(t) | a X(z)
2.  | a x1(t) + b x2(t) | a X1(z) + b X2(z)
3.  | x(t + T) or x(k + 1) | z X(z) − z x(0)
4.  | x(t + 2T) | z² X(z) − z² x(0) − z x(T)
5.  | x(k + 2) | z² X(z) − z² x(0) − z x(1)
6.  | x(t + kT) | z^k X(z) − z^k x(0) − z^{k−1} x(T) − ... − z x(kT − T)
7.  | x(t − kT) | z^{−k} X(z)
8.  | x(n + k) | z^k X(z) − z^k x(0) − z^{k−1} x(1) − ... − z x(k − 1)
9.  | x(n − k) | z^{−k} X(z)
10. | t x(t) | −T z (d/dz) X(z)
11. | k x(k) | −z (d/dz) X(z)
12. | e^{−at} x(t) | X(z e^{aT})
13. | e^{−ak} x(k) | X(z e^{a})
14. | a^k x(k) | X(z/a)
15. | k a^k x(k) | −z (d/dz) X(z/a)
17. | x(∞) | lim_{z→1} [(1 − z⁻¹) X(z)], if (1 − z⁻¹) X(z) is analytic on and outside the unit circle
18. | ∇x(k) = x(k) − x(k − 1) | (1 − z⁻¹) X(z)
20. | Σ_{k=0}^{n} x(k) | X(z)/(1 − z⁻¹)
21. | (∂/∂a) x(t, a) | (∂/∂a) X(z, a)
22. | k^m x(k) | (−z d/dz)^m X(z)
23. | Σ_{k=0}^{n} x(kT) y(nT − kT) | X(z) Y(z)
24. | Σ_{k=0}^{∞} x(k) | X(1)

Prof. Dr.-Ing. Werner Henkel
Digital Signal Processing

— Contents —
W. Henkel, Jacobs University Bremen 5 W. Henkel, Jacobs University Bremen 8 W. Henkel, Jacobs University Bremen 11
1 The linear transforms - a quick overview F (jω) ∗ 2π
X
∞
δ(ω − kω0 )
2 The sampling theorem k=−∞
Type transform rule descriptions
inverse transf. / transform We prove the correspondence
ν=∞ −2ω0 −ω0 0 ω0 2ω0 ω
a0 X
∞ ∞
real f (t) = + aν cos(νω0 t) + bν sin(νω0 t) periodic signals X X
2 Recovering the original spectrum is possible by means of a brickwall filter
Z ν=1 xs (t) = T · δ(t − kT ) ◦––• Xs (jω) = 2π · δ(ω − kω0 )
2 +T /2 k=−∞ k=−∞ which is one within [−ω/2, ω/2] and zero elsewhere. In time-domain, this
Fourier series aν = f (t) cos(νω0 t)dt with period T
T −T /2
Z Proof: means a convolution with a sinc function.
2 +T /2 2π
bν = f (t) sin(νω0 t)dt ω0 =
T −T /2
T
ejω0 t ◦––• 2πδ(ω − ω0 )
ν=∞
X X X ∞ ∞
complex f (t) = cν ejνω0 t cν = (aν − jbν )/2 ck ejkω0 t ◦––• 2π ck δ(ω − kω0 ) X 1 sin(πt/T ) X sin π t−kT
T
ν=−∞ T δ(t − kT ) · f (t) ∗ · = f (kT ) ·
Z +T /2 k k T πt/T π t−kT
T
1 k=−∞ k=−∞
Fourier series cν = f (t)e−jνω0 t dt c−ν = c∗ν
T −T /2
W. Henkel, Jacobs University Bremen 4 W. Henkel, Jacobs University Bremen 7 W. Henkel, Jacobs University Bremen 10
Type transform rule descriptions
Imagine now a given Fourier spectrum F (jω) •––◦ f (t),
Z-Transform fi = f (iT ) ◦––• F (z)
Digital Signal Processing in Communications P F (jω)
∞
L n=0 δ(t − nT ) · f (t) =⇒ for difference equations,
✷ Channel properties ∞
X
F (z) = fn · z −n i.e., recursions
✷ Passband and equivalent baseband description n=0 ω
N −1
1 X 2π
✷ PAM Discrete fi = √ · Fk ej N ik IDFT
N k=0 which is sampled by multiplying it with the Dirac sequence, i.e.,
2π
✷ QAM Fourier = · F x = (ej N )i
√1
N
∞
X ∞
X
✷ Equalizer structures, adaptation with zero forcing and least mean Transform i = 0, 1, . . . , N − 1
N −1
T· δ(t − kT ) · f (t) ◦––• 2π · δ(ω − kω0 ) ∗ F (jω)
squares 1 X 2π
k=−∞ k=−∞
(DFT/IDFT) Fk = √ · fk e−j N ik DFT
N i=0
✷ Multicarrier transmission, wavelets, filter banks 2π
Convolution with the Dirac sequence in frequency domain means a periodic
= · f x = (e−j N )k
√1
N spectrum
k = 0, 1, . . . , N − 1
✷ Least squares identification and prediction (LPC, Toeplitz algorithms)
✷ Filter structures II
✷ Design of digital filters Type transform rule descriptions Sampling in time domain:
✷ Short introduction to wave digital filters inverse transf. / transform P∞
xs (t) = T· k=−∞ δ(t − kT )
Fourier f (t) ◦––• F (jω) , F (jω) = F (f (t))
✷ Sampling rate conversion Z +∞ P∞ jkω0 t 1
R T /2
1 = T· k=−∞ ck e with ck = δ(t)e−jkω0 t dt = 1/T
Transform f (t) = F (jω)ejωt dω T −→ ∞ T −T /2
✷ FFT algorithms 2πZ −∞
+∞ R∞ (Fourier series) ω0 = 2π/T
F (jω) = f (t)e−jωt dt |f (t)|dt < ∞
✷ Quadrature mirror filters −∞
=
P∞ jkω0 t
k=−∞ e
−∞
Laplace f (t) ◦––• F (s), F (s) = L (f (t))
✷ Filter banks Z σ+j∞
1
Transform f (t) = F (s)est ds s = σ + jω ∞
X ∞
X
✷ (Adaptive filters under equalization in the communications part) 2πj σ−j∞
Z +∞ ejkω0 t ◦––• 2π · δ(ω − kω0 )
F (s) = f (t)e−st dt =Fourier tr. of f (t) · e−σt k=−∞ k=−∞
✷ Two-dimensional transforms, discrete cosine transform
0
✷ (Wavelets)
✷ Some aspects of audio and video coding
Analysis of Linear Systems (in discrete time domain) Stability, again ...
It is known that networks with a linear phase of the transfer function N
X M
X
should have a symmetric impulse response (Why?). How should the y(n) = − ak y(n − k) + bk x(n − k) |x(n)| ≤ Mx < ∞ =⇒ |y(n)| ≤ My < ∞
k=1 k=0
coefficients ai , bi of a network with delay elements be chosen to ensure a
Linearity allows to easily provide the response to a sum of sub-sequences ∞
linear phase? P X
x(n) = k ck xk (n) y(n) = h(k)x(n − k)
x(t)
! k=−∞
X X X ∞
=⇒ y(n) = F (x(n)) = F ck xk (n) = ck F (xk (n)) = ck yk (n) X ∞
X
b3 b2 b1 b0
|y(n)| =
h(k)x(n − k) ≤ |h(k)| · |x(n − k)|
k k k
P∞ k=−∞ k=−∞
y(t) Typically, we split into a collection of impulses x(n) = k=−∞ x(k)δ(n − k)
T T T !
∞
X
∞
X X∞ =⇒ |y(n)| ≤ Mx |h(k)|
=⇒ y(n) = F (x(n)) = F x(k)δ(n − k)
= x(k)F (δ(n − k)) = k=−∞
−a3 −a2 −a1
k=−∞ k=−∞ Stability means the sum of the impulse response samples needs to be
∞
X limited, i.e.,
= x(k)h(n, k) X∞
What does a linear phase really mean? k=−∞ |h(k)| < ∞
k=−∞
For linear and time-invariant systems, i.e., h(n, k) = h(n − k), we obtain
Some first examples of networks with delay elements Illustration of the discrete convolution
Let us consider the following structure
u(t)
b1 b0
Classification of discrete-time systems
Memoryless versus memory of duration N
y(t)
T Time-invariance: x(n) −→ y(n) =⇒ x(n − k) −→ y(n − k)
x1 (t)
Linearity: F (a1 x1 (n) + a2 x2 (n)) = a1 F (x1 (n)) + a2 F (x2 (n))
−a1
Causality: y(n) = F (x(n), x(n − 1), x(n − 2), ...)
Stability (bounded input-bounded output)
a) Determine the transfer function Y (s)/U (s) in Laplace domain.
|x(n)| ≤ Mx < ∞ =⇒ |y(n)| ≤ My < ∞
b) For s = jω, we obtain the frequency response in Fourier domain. What
are the properties
of this frequency
n response?
o
Draw UY(jω)b
(jω) Y (jω)
1
and ϕ = −arc U (jω) for a1 = 0, b0 /b1 = 1. Use the
normalized frequency Ω = ωT .
3 Discrete-Time Signals
Quantization Some elementary discrete-time signals 1, for n = 0
Unit sample sequence (time-discrete Dirac): δ(n) =
How about the quantization error of a linear quantizer with a step size of ∆ 0, for n ≠ 0
1, for n ≥ 0
and an equally distributed input within the quantization intervals? Let e
Unit step signal u(n) =
be the quantization error. 0, for n < 0
the convolution:
Classification of discrete-time signals ∞
X
Z ∆/2 P∞ y(n) = x(k)h(n − k)
1 Energy: E = |x(n)|2 ,
E{e2 } = e2 de = n=−∞
PN k=−∞
∆ −∆/2 ∆/2 Power: E = limN →∞ 1
|x(n)|2
∆/2 2N +1 n=−N
1 e3 ∆2 ∆
Periodic signal: x(n + N ) = x(n)
= −∆/2
∆ 3 −∆/2 12
Even (symmetric) signal: x(−n) = x(n), Odd (antimetric) signal:
x(−n) = −x(n)
1 1
xe (n) = [x(n)+x(−n)] , xo (n) = [x(n)−x(−n)] , x(n) = xe (n)+xo (n)
2 2
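The ∆²/12 result above is easy to confirm by simulation; a minimal Python sketch (my addition, assuming a rounding quantizer and an input that is uniform over many quantization intervals):

import numpy as np

rng = np.random.default_rng(0)
delta = 0.25                              # quantizer step size
x = rng.uniform(-10, 10, 1_000_000)       # input covering many quantization intervals
xq = delta * np.round(x / delta)          # uniform (rounding) quantizer
e = xq - x                                # quantization error, uniform in [-delta/2, delta/2)

print(np.var(e), delta**2 / 12)           # both ~ 5.2e-3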
General form of difference equation with constant coefficients In the case of stability, the particular solution can be obtained as the limit
Comments on the step response N M for n → ∞
X X 1
y(n) = − ak y(n − k) + bk x(n − k) yp (n) = lim y(n) =
From u(n) = δ(n) + u(n − 1) the step response can be written as n→∞ 1 + a1
k=1 k=0
N M The impulse response can be determined by setting the particular solution
X X
s(n) = h(n) + s(n − 1) ⇐⇒ h(n) = s(n) − s(n − 1) ⇐⇒ ak y(n − k) = bk x(n − k) to zero and use the Ck parameters to fulfill initial conditions. Thus, we only
k=0 k=0 use the homogeneous solution
Analogously to differential equations, we obtain a homogeneous solution N
X
∞
X (zero-input response) and a particular solution h(n) = yh (n) = Ck λkn
y(n) = x(k) [s(n − k) − s(n − k − 1)]
Homogeneous solution: yh (n) = λn k=1
k=−∞
∞ ∞ N
(At the same time, the impulse response is the zero-state response.) BIBO
X X X
= x(k)s(n − k) − x(k)s(n − 1 − k) ak λ(n−k) = 0 stability requires
k=−∞ k=−∞ k=0 ∞
N
∞ X
∞ N
X X
X X
Ck λkn ≤ |λk |n
= ys (n) − ys (n − 1)
λn−N λN + a1 λN −1 + a2 λN −2 + · · · + aN −1 λ + aN = 0 |h(n)| = |Ck |
n=0 n=0 k=1 n=0 k=1
In parentheses, we see the so-called characteristic polynomial ∞ ∞
How about the corresponding continuous relations? X X
yh (n) = C1 λ1n + C2 λ2n + · · · + CN λN
n |λk |n < ∞ ⇐⇒ |h(n)| < ∞
n=0 n=0
Recursive and non-recursive discrete-time systems Linear time-invariant systems and constant-coefficient difference
Example: cumulative average equations
n Example: y(n) = ay(n − 1) + x(n)
1 X Let us compute the zero-state response, i.e., y(−1) = 0
y(n) = x(k)
n+1 y(0) = ay(−1) + x(0)
k=0 From the difference eq., we obtain
y(1) = ay(0) + x(1) = a2 y(−1) + ax(0) + x(1)
n−1
X
y(2) = ay(1) + x(2) = a3 y(−1) + a2 x(0) + ax(1) + x(2) y(0) + a1 y(−1) = 1
(n + 1)y(n) = x(k) + x(n) = ny(n − 1) + x(n)
k=0 . y(0) = 1
..
n 1
y(n) = y(n − 1) + x(n) y(n) = ay(n − 1) + x(n) From total solution,
n+1 n+1
A system whose output y(n) at time n depends on any number of past = an+1 y(−1) + an x(0) + an−1 x(1) + · · · + ax(n − 1) + x(n) 1
y(0) = C + =⇒ C = . . .
output values y(n − 1), y(n − 2), . . . is called a recursive system. 1 + a1
n
X
y(n) = F (y(n − 1), y(n − 2), . . . , y(n − N ), x(n), x(n − 1), . . . , x(n − M )) y(n) = an+1 y(−1) + ak x(n − k) , n ≥ 0 This yields the final total solution y(n) = . . .
k=0
Non-recursive: y(n) = F (x(n), x(n − 1), . . . , x(n − M )) Pn
Zero-state solution: yzs (n) = k=0 ak x(n − k)
Non-recursive is FIR! Zero-input solution: yzi (n) = an+1 y(−1)
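A small Python sketch (not from the slides) of the example y(n) = a y(n − 1) + x(n): the direct recursion equals the sum of the zero-input part a^{n+1} y(−1) and the zero-state part Σ_k a^k x(n − k); the values of a, the input, and y(−1) are arbitrary choices.

import numpy as np

a, N = 0.9, 20
x = np.ones(N)                    # e.g., a unit-step input
y_init = 2.0                      # initial condition y(-1)

# direct recursion y(n) = a*y(n-1) + x(n)
y = np.empty(N)
prev = y_init
for n in range(N):
    y[n] = a * prev + x[n]
    prev = y[n]

# zero-input and zero-state parts computed separately
y_zi = a ** (np.arange(N) + 1) * y_init
y_zs = np.array([sum(a**k * x[n - k] for k in range(n + 1)) for n in range(N)])

print(np.allclose(y, y_zi + y_zs))        # True: total = zero-input + zero-state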
Example: Solutions for y(n) + a1 y(n − 1) = u(n) where u(n) is the unit
Finite-duration versus infinite-duration impulse responses — FIR Reminder: the continuous counterpart step sequence.
versus IIR Homogeneous solution: yh (n) = C · λn
d
δ(t) = s(t) ...
dt
FIR: h(n) = 0 for n < 0 and n ≥ M , i.e.,
Let ys (t) be the step response and h(t) the impulse response of a linear In case of multiple roots, we have the solution
M −1
X filter, i.e.,
y(n) = h(k)x(n − k) Z ∞ yh (n) = C1 λ1n + C2 nλ1n + C3 n2 λ1n + · · · + Cm nm−1 λ1n +
k=0 ys (t) = h(τ )s(t − τ )dτ
−∞ +Cm+1 λnm+1 + · · · + CN λn1 +
Z ∞
IIR: d ∂s(t − τ )
∞
X ys (t) = h(τ ) dτ = h(t) Particular solution: yp (n) = K · u(n)
y(n) = h(k)x(n − k) dt ∞ | ∂t
{z }
k=0 δ(t−τ ) =⇒ K = . . .
Total solution: y(n) = yh (n) + yp (n)
FIR versus IIR; non-recursive versus recursive
FIR means finite impulse response, IIR infinite impulse response.
FIR can be realized by a non-recursive structure and IIR can only be
realized by a recursive structure (or approximated by a long non-recursive
structure).
Recursive:
y(n) = F [y(n − 1), . . . , y(n − N ), x(n), . . . , x(n − M )]
If LTI:
N
X M
X
y(n) = − ak y(n − k) + bk x(n − k)
k=1 k=0
Non-recursive:
y(n) = F [x(n), . . . , x(n − M )]
N M
X X Canonical ‘observer’ structure:
y(n) = − ak y(n − k) + bk x(n − k) i
k=1 k=0
Special cases:
PM
FIR: y(n) = k=0 bk x(n − k)
M
X bm b2 b1 b0
v(n) = bk x(n − k) impulse resp.: h(k) = bk , k = 0, 1, ..., c
k=0
N
X Purely recursive:
y(n) = − ak y(n − k) + v(n)
N
X −am −a2 −a1
k=1
y(n) = − ak y(n − k) + b0 x(n)
k=1
N
X
w(n) = − ak w(n − k) + x(n) Typical building block of digital filters:
k=1 m
M y(n) = −a1 y(n − 1) − a2 y(n − 2) + b0 x(n) + b1 x(n − 1) + b2 x(n − 2) C(z) = I(z) · (b0 + b1 z −1 + · · · + bm z −1 ) − C(z)(a1 z −1 + · · · + am z m )
X
y(n) = bk w(n − k)
G(z) = C(z)/I(z) = B(z)/A(z)
k=0
b0 + b1 z −1 + · · · + bm z −m
G(z) = B(z)/A(z) =
1 + a1 z −1 + · · · + am z −m
Structures of Linear Discrete-
The two canonical forms ...
Time
Canonical ‘controller’ structure:
Systems
c
C(z) = B(z) · X(z)
y(n) = −a1 y(n−1)+b0 x(n)+b1 x(n−1)
X(z) = I(z) +
b0 b1 b2 bm
+ X(z) · (−a1 z −1 − · · · −
v(n) = b0 x(n) + b1 x(n − 1)
am z −m )
i x
y(n) = −a1 y(n − 1) + v(n) ⇒ X(z) =
I(z)
1+a1 z −1 +···+am z −m
w(n) = −a1 w(n − 1) + x(n) −a1 −a2 −am
y(n) = b0 w(n) + b1 w(n − 1)
G(z) = C(z)/I(z) = B(z)/A(z)
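A minimal sketch of the canonical ('controller') structure in Python (my addition): a single delay line w(n) is shared by the recursive and the non-recursive part, following w(n) = x(n) − Σ a_k w(n−k), y(n) = Σ b_k w(n−k); the example coefficients are arbitrary.

import numpy as np

def direct_form_ii(b, a, x):
    # canonical structure with one shared delay line; a = [1, a1, ..., aN], len(b) <= len(a)
    N = len(a) - 1
    b = np.concatenate([b, np.zeros(N + 1 - len(b))])
    d = np.zeros(N)                         # delay line: d[k-1] holds w(n-k)
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        w0 = xn - np.dot(a[1:], d)          # w(n) = x(n) - a1 w(n-1) - ... - aN w(n-N)
        y[n] = b[0] * w0 + np.dot(b[1:], d) # y(n) = b0 w(n) + b1 w(n-1) + ...
        d = np.concatenate([[w0], d[:-1]])  # shift the delay line
    return y

# the typical second-order building block y(n) = -a1 y(n-1) - a2 y(n-2) + b0 x(n) + b1 x(n-1) + b2 x(n-2)
b, a = np.array([1.0, 0.5, 0.25]), np.array([1.0, -0.9, 0.81])
impulse = np.zeros(16); impulse[0] = 1.0
print(direct_form_ii(b, a, impulse))        # impulse response of G(z) = B(z)/A(z)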
4 Remarks on poles in the continuous case 5 One-sided z-Transform 6 Diagonalization of a Toeplitz matrix with
We compare the solution of the differential equation with the Laplace
Final Value Theorem
the DFT
transform.
Passive RLCM networks can be described by linear differential equations lim x(n) = lim (z − 1)X + (z) We are aware that the cyclic convolution in time domain corresponds to a
n→∞ z→1
with constant coefficients. multiplication in DFT domain.
Proof:
Homogeneous differential equation N
X −1
an y (n) + an−1 y (n−1) + . . . + a1 y ′ + a0 y = 0 x3 (m) = x1 (n) · x2 ((m − n) mod N ) ◦––• X3 (m) = X1 (m) · X2 (m)
∞
X n=0
Characteristic equation Z {x(n + 1) − x(n)} = (x(n + 1) − x(n)) z −n
We rewrite the convolution using a Toeplitz matrix
an λn + an−1 λn−1 + . . . + a1 λ1 + a0 = 0 n=0
!
k
X x2 (0) x2 (N − 1)
x2 (N − 2) ···
Solutions of the homogeneous diff. eq. = lim (x(n + 1) − x(n)) z −n
k→∞ x2 (1) x2 (0) x2 (N − 1)
eλr t , t · eλr t , . . . , tmr −1 · eλr t n=0
!
k
X C := x2 (2) x2 (1) x2 (0)
mr : Multiplicity of the zeros λr of the characteristic equation = lim x(n + 1)z −n − x(n)z −n
... .. .. ..
k→∞
n=0
. . .
Stable is a system, when ℜ{λr } < 0; the limiting case with ℜ{λr } = 0 and
mr = 1 leads to oscillations of constant amplitude or to a constant DC. = lim −x(0) + x(1)(1 − z −1 ) + · · · + x(k)z −(k−1) (1 − z −1 ) + x(k + 1)z −k x2 (N − 1) x2 (N − 2)
k→∞
Recursive realization Stability is given, when the poles of the transfer function Y (s)/X(s) are
M located in the left half-plane.
1 X 1
y(n) = x(n − 1 − k) + [x(n) − x(n − 1 − M )]
M +1 M +1 In the limiting case (limit stable), the poles of Y (s)/X(s) are on the
k=0
1 imaginary axes (and are single).
= y(n − 1) + [x(n) − x(n − 1 − M )]
M +1 limit stable
[Figure: pole locations in the s-plane — the hatched left half-plane is the stable region; poles on the imaginary axis mark the limit-stable case]
Equating the two (⋆)-marked results, we finally obtain
−x(0) + lim_{k→∞} x(k + 1) = −x(0) + lim_{z→1} (z − 1) X^+(z) ,
which is the desired property after omitting −x(0) on both sides.
FIR realizations (non-recursive and recursive) Taking the limit of both sides as z goes to 1, yields
non-recursive: Inhomogeneous differential equation:
M
lim Z {x(n + 1) − x(n)} =
X z→1
y(n) = bk x(n − k) an y (n) + an−1 y (n−1) + . . . + a1 y ′ + a0 y = r(x(t))
= lim lim −x(0) + x(1)(1 − z −1 ) + · · · + x(k)z −(k−1) (1 − z −1 ) + x(k + 1)z −k
k=0 z→1 k→∞
Transformation into the Laplace domain yields
Example “moving average” = lim lim −x(0) + x(1)(1 − z −1 ) + · · · + x(k)z −(k−1) (1 − z −1 ) + x(k + 1)z −k
k→∞ z→1
M an sn Y (s) + an−1 sn−1 Y (s) + . . . + a1 s1 Y (s) + a0 Y (s) + K(s) = R(s) · X(s)
1 X = −x(0) + lim x(k + 1) . (⋆)
y(n) = x(n − k) k→∞
M +1 K(s) contains all terms with initial conditions. Since we are only interested
k=0
Impulse response in the transfer properties regarding a source signal X(s), we assume Applying the translation property of the one-sided z-transform leads to
1 K(s) = 0.
h(n) = Z {x(n + 1) − x(n)} = z(X + (z) − x(0)) − X + (z)
M +1 Solving for Y (s)/X(s), we recognize that the poles correspond to the zeros
= (z − 1)X + (z) − zx(0))
of the characteristic equation.
lim Z {x(n + 1) − x(n)} = lim (z − 1)X + (z) − zx(0))
Y (s) R(s) z→1 z→1
= =
lim (z − 1)X + (z) − lim zx(0))
X(s) an sn + an−1 sn−1 + . . . + a1 s1 + a0 z→1 z→1
= −x(0) + lim (z − 1)X + (z) . (⋆)
z→1
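A short sketch (not in the original slides) comparing the non-recursive moving average with its recursive realization y(n) = y(n − 1) + [x(n) − x(n − 1 − M)]/(M + 1); for zero initial conditions both structures produce identical outputs.

import numpy as np

M = 7
rng = np.random.default_rng(1)
x = rng.standard_normal(200)

# non-recursive realization: y(n) = 1/(M+1) * sum_{k=0}^{M} x(n-k),  x(n) = 0 for n < 0
y_fir = np.array([sum(x[n - k] for k in range(M + 1) if n - k >= 0)
                  for n in range(len(x))]) / (M + 1)

# recursive realization: y(n) = y(n-1) + ( x(n) - x(n-1-M) ) / (M+1)
y_rec = np.empty(len(x))
prev = 0.0
for n in range(len(x)):
    x_old = x[n - M - 1] if n - M - 1 >= 0 else 0.0
    prev = prev + (x[n] - x_old) / (M + 1)
    y_rec[n] = prev

print(np.allclose(y_fir, y_rec))    # True: the recursion realizes the same FIR response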
7 Some remarks on the Hilbert transform
1 1 1 1 1 1 1 1
we obtain
1 w2 w4 w6 w1 w3 w5 w7
We assume a causal time-domain function u(t) = 0 for t < 0.
1 w4 1 w4 w2 w6 w2
w6
URo (f ) = H (UI e (f ))
u(t) = uRe (t) + uRo (t) + juI e (t) + juI o (t)
In total, we get 1 w6 w4 w2 w3 w w7 w5 W4 D4 W 4
W8 ·P8 =
=
UR (f ) = URe (f ) + URo (f ) 1 1 1 1 w4 w4 w4 w4 W4 w 4 D4 W 4
1 w2 w4 w6 w5 w7 w1 w3
U (f ) = URe (f ) + URo (f ) + jUI e (f ) + jUI o (f ) Hilbert transform
1 w4 1 w4 w6 w2 w6 w2
We split the real part of the time-domain function into even and odd UI (f ) = UI e (f ) + UI o (f )
components. 1 w6 w4 w2 w7 w5 w3 w1
All Fourier transforms of causal signals represent pairs of Hilbert
uR (t) = uRe (t) + uRo (t)
transforms. Think, e.g., of the Fourier transform of δ(t − t0 )! with
1 1
uRe (t) = (uR (t) + uR (−t)) , uRo (t) = (uR (t) − uR (−t))
2 2
D4 = diag(1, w, w2 , w3 )
From the convolution theorem, we know that 1
Z +∞
UI o (ϕ) −1
Z +∞
URe (ϕ)
URe (f ) = dϕ , UI o (f ) = dϕ
π −∞ f −ϕ π −∞ f −ϕ
X3 = Wx3 = WCx1 = DX1 = DWx1
We define the Hilbert transform to be
8 Fast Fourier Transform (FFT)
WCx1 = DWx1
The DFT matrix was given by
WC = DW Z +∞
1 1 g(y)
WCW−1 = D and C = W−1 DW ĝ(x) = H (g(x)) = ∗ g(x) = dy 1 1 1 1 1 1 1 1
πx π −∞ x−y
−1 −1
Z +∞
ĝ(y)
1 w1 w2 w3 w4 w5 w6 w7
g(x) = −H (ĝ(x)) = ∗ ĝ(x) = dy
πx π x−y W8 =
1 w2 w4 w6 1 w2 w4 w6
X2 (0) 0 0 ··· ··· −∞
.
..
0 X2 (1) 0 ···
1 w7 w6 w5 w4 w3 w2 w1
D=
0 0 X2 (2) 0 ··· Applying the Hilbert transform twice yields the original but negative
. .. function.
.. .
We show the derivations using permutation matrices on the left or on the
If we split up an imaginary part of the time-domain function in the same right side, meaning row or column permutations, respectively.
0 0 ··· · · · X2 (N − 1)
way,
We observe: the DFT matrix diagonalizes a Toeplitz matrix. uI (t) = uI e (t) + uI o (t) ,
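The diagonalization can be checked numerically; note that the matrix C built from cyclic shifts is, more precisely, a circulant (a special Toeplitz) matrix. A sketch (my addition) using numpy's unnormalized DFT convention, so that D contains the unnormalized DFT of x2:

import numpy as np

N = 8
rng = np.random.default_rng(2)
x1 = rng.standard_normal(N)
x2 = rng.standard_normal(N)

# circulant matrix of the cyclic convolution: C[i, j] = x2((i - j) mod N)
C = np.array([[x2[(i - j) % N] for j in range(N)] for i in range(N)])

# DFT matrix in numpy's (unnormalized) convention, F[k, n] = exp(-j 2 pi k n / N)
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)
Finv = F.conj().T / N                          # inverse DFT matrix

D = F @ C @ Finv                               # should be diagonal
print(np.allclose(D, np.diag(np.fft.fft(x2))))          # True
print(np.allclose(np.fft.fft(C @ x1),
                  np.fft.fft(x2) * np.fft.fft(x1)))     # cyclic convolution <-> product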
7.1 Minimum-phase systems and Hilbert transform
uR (t)
If the transfer function of a stable and causal system is given by
uRe (t)
Let us abbreviate the DFT (Vandermonde) matrix as F (jω) = e−g(jω) = e−a(ω)−jb(ω) = e−a(ω) · e−jb(ω)
1 1 1 ··· ··· with
1 WN WN2 WN3 ···
g(jω) = a(ω) + jb(ω)
uRo (t)
1
Since g(λ) = − ln F (λ), poles and zeros of F will map into poles of g. Thus,
W = √ · 1 WN2 WN4 WN6 ···
N . to meet the stability criterion for g as well, poles and zeros of F have to be
..
in the left half plane, i.e., F has to be minimum phase.
1 WNN −1 WN
2(N −1)
··· WN
(N −1)(N −1) uRe (t) = sign(t) · uRo (t) , uRo (t) = sign(t) · uRe (t)
Then, we obtain the Hilbert transform between a(ω) and b(ω):
Z +∞ Z
−1 a(y) 2ω +∞ a(y)
1 b(ω) = dy = dy
W · W∗ = I −→ unitary sign(t) ◦––• π ω−y
−∞ π ω2 − y2 0
jπf Z Z
+∞ +∞
1 1 1 b(y) 2 y · b(y)
URe (f ) = ∗ jUI o (f ) , jUI o (f ) = ∗ URe (f ) a(ω) = dy = − dy ,
jπf jπf π −∞ ω−y π 0 ω2 − y2
since a(ω) is even and b(ω) is odd for a real time-domain function.
Some other FFTs
W4 I4 I4 I4
P8T · W8 = · · FFT Remarks
W4 D4 I4 −I4
Cooley-Tukey decimation in frequency / time
Small-Radix Cooley-Tukey presented for radix 2, also for radix 4 9.2 Overlap-add method
P4T
· P8T · W8 = Rader-Brenner specialized small-radix CT
P4T x1 (n) = {x(0), x(1), . . . , x(L − 1), 0, 0, . . . , 0}
Good-Thomas prime-factor alg., | {z }
M −1 zeros
W2 I2 I2 I2 uses Chinese remainder theorem
x2 (n) = {x(L), x(L + 1), . . . , x(2L − 1), 0, 0, . . . , 0}
W2 D2 I −I2 | {z }
2 Horner rule no FFT! M −1 zeros
· · ·
W2
I2
I2 I2
Goertzel alg. no FFT, low compl. when comp. a few DFT x3 (n) = {x(2L), x(L + 1), . . . , x(3L − 1), 0, 0, . . . , 0}
| {z }
W2 D2 I2 −I2 DFT by convolutions only useful for smaller sizes M −1 zeros
Bluestein
I4 I4 I4
· · Rader
D4 I4 −I4
Winograd small FFT
1 1 1 1 1 1 1 1
1 w2 w4 w6 1 w2 w4 w6
1 w4 1 w4 1 w4 1 w4
1 w6 w4 w2 1 w6 w4 w2
PT · W =
8 8
=
1 w1 w2 w3 w4 w5 w6 w7
1 w3 w6 w1 w4 w7 w2 w5
1 w5 w2 w7 w4 w1 w6 w3
1 w7 w6 w5 w4 w3 w2 w1
W4 W4
= with D4 = diag(1, w, w2 , w3 )
W 4 D4 w 4 W 4 D4
9 Fast convolutions using the FFT
These techniques are used to replace convolutions by multiplications in
I4 I4 I4 W4
W8 · P8 = · · DFT-domain — often called fast convolutions
I4 −I4 D4 W4
P4 I4 I4 I4 9.1 Overlap-save method
W8 · P8 · = · ·
P4 I4 −I4 D4 Let the length of a transversal filter be M .
I2 I2 I2 W2 x1 (n) = {0, 0, . . . , 0 , x(0), x(1), . . . , x(L − 1)}
| {z }
I2 −I2 D2 W2
M −1 points
· · ·
I2 I2
I2
W2
x2 (n) = {x(L − M + 1), . . . , x(L − 1), x(L), x(L + 1), . . . , x(2L − 1)}
| {z } | {z }
I2 −I2 D2 W2 M −1 data points from x1 (n) L new data points
x3 (n) = {x(2L − M + 1), . . . , x(2L − 1), x(2L), x(L + 1), . . . , x(3L − 1)}
| {z } | {z }
M −1 data points from x2 (n) L new data points
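A compact Python sketch of the overlap-save segmentation described above (my addition): each block consists of M − 1 samples saved from the previous block plus L new samples, the circular convolution is carried out with an FFT of length L + M − 1, and the first M − 1 outputs of every block are discarded. Block length and test signals are arbitrary.

import numpy as np

def overlap_save(x, h, L=64):
    M, Lx = len(h), len(x)
    Nfft = L + M - 1
    H = np.fft.fft(h, Nfft)
    xp = np.concatenate([np.zeros(M - 1), x, np.zeros(L)])   # M-1 leading zeros for block 1
    y = []
    for start in range(0, Lx, L):
        block = xp[start:start + Nfft]                       # M-1 saved + L new samples
        yblock = np.fft.ifft(np.fft.fft(block, Nfft) * H).real
        y.append(yblock[M - 1:])                             # discard the first M-1 (aliased) outputs
    return np.concatenate(y)[:Lx]

# check against direct linear convolution
rng = np.random.default_rng(3)
x, h = rng.standard_normal(1000), rng.standard_normal(33)
print(np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)]))   # True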
Properties N
X −1
N −1
Y =⇒ RN −j = − AN −i · Ri−j
(y − e−j2πl/N ) i=0
N
X −1
= X(k) ·
l=0;l6=k
(3) Tn+1 (x) − 2xTn (x) + Tn−1 (x) = 0 N
X
i=0
N · e+j2πk/N =⇒ AN −i · Ri−j = 0 with A0 = 1
⇐⇒ cos(n + 1)θ + cos(n − 1)θ = 2 cos θ cos nθ
i=0
We note that Chebyshev polynomials are cosines in disguise and thus possess an Furthermore, the average squared error will then be
N −1
Y equal-ripple property N −1 N −1 N −1
N e+j2πk/N = e−j2πk/N − e−j2πl/N = X X X
l=0;l6=k 1
T0 E{e2 } = R0 + 2 · AN −i · RN −i + AN −i · AN −k · Rk−i
T4 T1 i=0 i=0 k=0
T3 | {z }
N −1 NY
−1 0.5 −
PN −1
AN −i ·RN −i
= e−j2πk/N · 1 − e−j2πm/N i=0
N −1 N −1
| {z } m=1 0 X X
e+j2πk/N
| {z } = R0 + AN −i · RN −i = R0 + AN −i · R−(N −i)
∈R
i=0 i=0
−0.5
That the product will be real follows directly from the symmetry. We do T2
In the last step, we used the fact that the autocorrelation function is symmetric
not prove that it actually equals N . −1
−1 −0.5 0 0.5 1 (we assumed real variables, in the complex case Hermitian symmetry!).
Some additional steps to (5.1.39) of Proakis/Manolakis 10 Chebyshev polynomials 12 Linear prediction and the
We show that this is equivalent to the more common formulation of the Levinson-Durbin algorithm
Possible definition:
Lagrange interpolation.
Tn (x) = cos(n arccos x) Let the linear prediction of yN ∈ R be
The Fourier transform of a finite-duration sequences in terms of its DFT
We have to see that this is indeed a polynomial. N −1
was given as X
N −1 Moivre’s theorem: ŷN =− AN −i yi
1 − e−jωN X X(k) i=0
X(ω) =
N i=0
1 − e−j(ω−2πk/N ) cos nθ + j · sin nθ = (cos θ + j · sin θ)n PN −1
The prediction error will be e = yN + i=0 AN −i yi
We abbreviate y = e−jω and rewrite Expanding this and taking its real part and its expectation
!2
N −1 N −1
1 X −(y N − 1) n/2
X X
X(y) = X(k) · (1) cos nθ = C(n, 2k)(−1)k cosn−2k θ sin2k θ E{e2 } = E yN + AN −i yi
N i=0 1 − y · e+j2πk/N
k=0 i=0
N
Y −1
− (y − e−j2πl/N ) With sin2k θ = (1 − cos2 θ)k and The normal equations for the coefficients result from
N −1
1 X l=0 θ = arccos x , x = cos θ , cos(arccos x) = x
( N −1
! )
= X(k) · (2) ∂E{e2 } X !
N i=0 −e+j2πk/N (y − e−j2πk/N ) yN + =2·E AN −i yi · yj = 0
=⇒ cos(nθ) is a polynomial of degree n in cos θ ∂AN −j i=0
We finally obtain the more common formulation of the Lagrange 11 Butterworth filters
interpolation formula:
N −1
Correction of Fig. 8.37, Page 683.
Y
e−jω − e−j2πl/N Order: 4 Order: 5
N
X −1 1 1
l=0;l6=k
X(ω) = X(k) · N −1
Y
i=0 0.5 π π 0.5
e−j2πk/N − e−j2πl/N 2
+ 8 π
2
+ π
10
l=0;l6=k
| {z } 0 0
δ(ω,2πk/N ) poles of poles of
−0.5 H(s) H(−s) −0.5
H(s) H(−s)
The so-called Kronecker delta δ(ω, 2πk/N ) equals one when the frequency
matches the kth sample and zero elsewhere. −1 −1
−1 −0.5 0 0.5 1 −1 −0.5 0 0.5 1
Weighted linear combinations of both solution vectors are used to eliminate 14 Polyphase filter banks M
X −1
the discrepancies αm and βm yLP,k (m) = kλ
[xBP,λ (m) ∗ hLP,λ (m)] · WM
α α l α
α r λ=0
(1, Am,1 + Km Bm,m , . . . , Km ) · R = Rm + Km βm , 0, . . . , 0, αm + Km Rm
α r −1
With Km = −αm (Rm ) , we obtain
α
(1, Am+1,1 , . . . , Am+1,m+1 ) = (1, Am,1 , . . . , Am,m , 0)+Km (0, Bm,m , . . . , Bm,1 , 1)
−1 After the LP filter (”TP”, German Tiefpass)
β l
For the second solution vector, we obtain with Km = −βm Rm
ỹLP (n) xBP (n) · e−jΩ0 n ∗ hLP (n)
=
β
(Bm+1,m+1 , . . . , Bm+1,1 , 1) = (0, Bm,m , . . . , Bm,1 , 1)+Km (1, Am,1 , . . . , Am,m , 0) . X∞
l r
= xBP (ν) · e−jΩ0 ν · hLP (n − ν)
Rm and Rm follow the recursions ν=−∞
l l r −1 r r l −1 After the decimation (sampling-rate reduction)
Rm+1 = Rm − αm βm (Rm ) , Rm+1 = Rm − αm βm (Rm )
From the initialization of the LDA, we see yLP (m) = ỹLP (m · M )
X∞
l r
Rm = Rm . = xBP (ν) · e−jΩ0 ν · hLP (mM − ν)
ν=−∞
Upsampling (interpolation) by, e.g., a factor of 4 For M different center frequencies Ω0k = k · 2π/M , k = 0, 1, 2, . . . , M − 1
and abbreviating WM = exp(−j2π/M )
The Levinson-Durbin algorithm extends submatrices along the main LP ∞
X
fs1 k(mM −ν)
diagonal in a recursive fashion, starting from yLP,k (m) = xBP (mM − ν) · WM · hLP (ν)
ν=−∞
zero padding
(1) (R0 ) = (R0l ) = (R0r )
4 We substitute ν = pM − λ and obtain
Right-sided extension with zero yields fs1 fs2 =4· fs1 M −1 ∞
X X
LP fs2 = 4 · fs1 yLP,k (m) = xBP ([m − p]M + λ) · WM
k([m−p]M +λ)
· hLP (pM − λ)
l
(1, Am,1 , . . . , Am,m , 0) · Rm+1 = Rm , 0, . . . , 0, αm λ=0 p=−∞
Pm LP
If αm = Rm+1 + i=1 Am,i Rm−i+1 = 0, the solution of this sub-system Due to M
WM = 1,
would have been found. Otherwise, an auxiliary set of equations is used " #
M
X −1 ∞
X
fs2 = 4fs1 kλ
r
(0, Bm,m , . . . , Bm,1 , 1) · Rm+1 = (βm , 0, . . . , 0, Rm ) , yLP,k (m) = xBP,λ (m − p) · WM · hLP,λ (p)
λ=0 p=−∞
zero padding
which is also processed recursively. M
X −1
kλ
= [xBP,λ (m) ∗ hLP,λ (m)] · WM
λ=0
Note that zero-padding has no influence on the spectrum!
... Yule-Walker equations
13 Sampling-rate conversion
R0 R1 · · Rm
..
We first consider downsampling (decimation) by, e.g., a factor of 2.
R R0 . ·
−1 LP
(1, Am,1 , . . . , Am,m ) · .. .. .. l fs1
. . . = (Rm
· , 0, . . . , 0)
Making use of the commutativity of the convolution
.. ..
·
. . R1
LP ∞
X
R−m · · R−1 R0 yLP (m) = xBP (mM − ν) · e−jΩ0 (mM −ν) · hLP (ν)
| {z } fs1 ν=−∞
=: Rm
2
Toeplitz matrix, which is also Hermitian. The first component on the right
l
side is the error Rm = E{e2 }. The other equations result from the fs2 = fs1 /2
derivatives. We used m instead of N to indicate the iteratively increased
size until the error drops below a certain required value. fs2 = fs1 /2
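A sketch of the Levinson-Durbin recursion for the real, symmetric case (my addition; here the backward vector B is simply the reversed forward vector A, so only one vector needs to be stored). The example autocorrelation values are arbitrary.

import numpy as np

def levinson_durbin(R, order):
    # recursive solution of the Yule-Walker equations for a real, symmetric
    # autocorrelation sequence R(0), R(1), ...; returns A = (1, A_1, ...) and the error power E
    A = np.array([1.0])
    E = R[0]
    for m in range(1, order + 1):
        alpha = R[m] + np.dot(A[1:], R[m - 1:0:-1])   # discrepancy after extending by zero
        k = -alpha / E                                # 'reflection' coefficient
        Aext = np.concatenate([A, [0.0]])
        A = Aext + k * Aext[::-1]                     # combine forward and reversed vectors
        E = E * (1.0 - k ** 2)
    return A, E

R = np.array([1.0, 0.5, 0.25])                        # example autocorrelation values
A, E = levinson_durbin(R, order=2)
T = np.array([[R[abs(i - j)] for j in range(3)] for i in range(3)])   # Toeplitz matrix
print(A, E)
print(T @ A)                                          # ~ (E, 0, 0): Yule-Walker satisfied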
Sub-sampling of the color
N −1
πk
X π(2n + 1)k 4:2:0 sampling 4:2:2 sampling
= ej 2N · 2 · sr (n) · cos
n=0
2N
A 2-dimensional DFT would then be
x −1 N
NX y −1 One possible definition of the DCT is (slightly different)
1 X −k n
S(kx , ky ) = p s(nx , ny ) · WN−k
x
x nx
· W Ny y y N −1
Nx · Ny nx =0 ny =0 X π(2n + 1)k
SDCT (k) = α(k) · s(n) · cos
kx = 0, 1, . . . , Nx − 1 , ky = 0, 1, . . . , Ny − 1 n=0
2N
x −1 N
NX y −1 k = 0, 1, . . . , N − 1
1 X k n
s(nx , ny ) = p S(kx , ky ) · WNkxxnx · WNyy y N −1
Nx · Ny k =0 k =0 X π(2n + 1)k
x y s(n) = α(k) · SDCT (k) · cos
2N
nx = 0, 1, . . . , Nx − 1 , ny = 0, 1, . . . , Ny − 1 k=0
n = 0, 1, . . . , N − 1
WNx = ej2π/Nx , WNy = ej2π/Ny
The labeling ny : nd1 : nd2 :
p1/N , k = 0
α(k) = p ny is the number of luminance samples, nd1 the number of chrominance samples
2/N , k = 1, . . . , N − 1
in odd lines, and nd2 the number of chrominance samples in even lines, all in 4
neighboring samples.
s(n) n = 0, 1, . . . , N − 1
sr (n) = 16 Some aspects of video coding
s(2N − 1 − n) n = N, N + 1, . . . , 2N − 1
15 Discrete Cosine Transform 1
2N
X −1
2π
YUV representation of PAL
Sr (k) = √ sr (n) · e−j 2N kn , k = 0, 1, . . . , 2N − 1
2N n=0
Y 0.299 0.587 0.114 R
We remember the one-dimensional DFT to be U = −0.147 −0.289
√ h i 0.436 · G
N −1 2π 2π
1 X 2N · Sr (k) = s(0) · e−j 2N k0 + e−j 2N k(2N −1) + · · · V 0.615 −0.515 −0.100 B
S(k) = √ s(n) · WN−kn , k = 0, 1, . . . , N − 1
N n=0 h 2π 2π
i
+s(1) · e−j 2N k1 + e−j 2N k(2N −2) + · · · YIQ representation of NTSC
N −1
1 X ···
s(n) = √ S(k) · WNkn , n = 0, 1, . . . , N − 1 h i Y 0.299 0.587 0.114 R
N n=0 2π 2π
+s(N − 1) · e−j 2N k(N −1) + e−j 2N kN I = 0.596 −0.275 −0.321 · G
WN = ej2π/N N −1 h i Q 0.212 −0.523 0.311 B
X 2π 2π
= sr (n) · e−j 2N kn + e−j 2N k(2N −1−n)
This is the orthonormal version with a unitary transform matrix. n=0
N
X −1 h i
2π 2π
= sr (n) · e−j 2N kn + e−j 2N k(2N −1−n) Pictures and tables in this section have been taken from Mehrholz, H., diploma thesis, Hochschule Bremen, Oct. 2003
n=0
Correspondingly, the 2D-DCT is defined as
The DFT assumes a periodicity in time and frequency domain. The DCT
(Discrete Cosine Transform) assumes a periodicity after mirroring the x −1 N
NX y −1
X π(2nx + 1)k
signal as shown in the lower figure. The period will then be 2N . SDCT (kx , ky ) = αkx αky · s(nx , ny ) · cos ·
nx =0 ny =0
2Nx
s(n) π(2ny + 1)k
· cos
2Ny
kx , ky = 0, 1, . . . , Nx/y −1
x −1 N
NX Xy −1
s(nx , ny ) = αkx αky · SDCT (kx , ky ) ·
0 N −1 kx =0 ky 0
sr (n) π(2nx + 1)kx π(2ny + 1)ky
· cos · cos
2Nx 2Ny
nx , ny = 0, 1, . . . , Nx/y − 1
p1/N , k
x/y = 0
N −1 2N − 1 αkx , αky = p
0 2/N , kx/y = 1, . . . , Nx/y − 1
N 2N
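A small numerical sketch (not from the slides) of the orthonormal DCT-II defined above: it checks that the transform matrix is orthonormal, that the inverse formula reconstructs the signal, and that the same coefficients follow from the 2N-point DFT of the mirrored sequence s_r(n).

import numpy as np

def dct_matrix(N):
    # orthonormal DCT-II:  S(k) = alpha(k) * sum_n s(n) cos(pi (2n+1) k / (2N))
    n = np.arange(N)
    alpha = np.full(N, np.sqrt(2.0 / N))
    alpha[0] = np.sqrt(1.0 / N)
    return alpha[:, None] * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))

N = 8
rng = np.random.default_rng(4)
s = rng.standard_normal(N)
C = dct_matrix(N)
S = C @ s

print(np.allclose(C @ C.T, np.eye(N)))      # unitary (orthonormal) transform matrix
print(np.allclose(C.T @ S, s))              # the inverse formula reconstructs s(n)

# same coefficients from the 2N-point DFT of the mirrored sequence s_r = [s, s reversed]
sr = np.concatenate([s, s[::-1]])
k = np.arange(N)
alpha = np.where(k == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))
S_dft = alpha * 0.5 * np.real(np.exp(-1j * np.pi * k / (2 * N)) * np.fft.fft(sr)[:N])
print(np.allclose(S_dft, S))                # True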
Runlength encoding
It compresses sequences with the same elements and consists of: ESC as a MPEG-2 MPEG-4 H.263
start symbol, the runlength r, and the repeated symbol si . Published 1995 1999 1995
Max. resolution 1920 x 1152 720 x 576 1408 x 1152
AC-coeff. start Standard resolution 720 x 576 720 x 576 352 x 288
Color format Y Cb Cr 4:2:0, 4:2:2:, 4:4:4 4:2:0
DC coeff. I: Intra frames, compression 1:5, original frames
Interlace yes yes no
P: Predicted frames, compression 1:24 Data rate max. 100 MBit/s 10 kbit/s to 100 max. 30·64 kBit/s
Mbit/s
B: Bidirectional predicted frames, compression 1:60
Typical data rate 6500 kbit/s (720 x 880 kbit/s (720 x 576) 64 kBit/s (352 x 288)
576)
Video quality very good betw. good and very betw. acceptable and
good good
Applications DVD video multimedia picture phone
AC-coeff. end
Quantization of DCT outputs
Original grey-scale data DCT coefficients
Frame structure in MPEG 2
Frame sequence for reproduction
M-JPEG DV MPEG-1
Published 1992 (based on ISO 1992
JPEG)
Max. resolution arbitrary 720 x 576 (fixed) 352 x 288
Standard resolution 720 x 576 (fixed) 352 x 288
Quantization table Quantized coefficients
Color format Y Cb Cr 4:2:0, 4:1:1, 4:2:2, 4:2:0 (PAL), 4:1:1 4:2:0
h i 4:4:4 (NTSC)
C(u,v)
q(u, v) = round Q(u,v) Interlace yes yes no
Data rate dep. on resolution around 28 MBits/s max. 3 MBit/s
C ′ (u, v) = q(u, v) · Q(u, v) Frame sequence for transmission (fixed)
Typical data rate 48 Mbit/s (720 x 576) around 28 MBits/s 1380 kbit/s (352 x
(fixed) 288)
Reconstructed coefficients Reconstructed grey-scale data
Video quality betw. bad and very very good betw. acceptable and
good good
Applications digital camera digital camcorder VCD
Block artefacts Motion estimation by block matching
Frame 1 Frame 2
Huffman encoding
Motion vector Macro block
The difference (prediction-error) between original frame and the predicted
one resulting from motion estimation will be encoded like an original frame,
i.e., DCT, quantization, ...
Effect of One’s complement representation
Example: 3-Bit integer + −
plus sign bit 0 0000 1111
1 0001 1110
2 0010 1101
3 0011 1100 Little warning: 16-bit or longer integers might have low and high bytes in a
Usually, a PSD is given in dBm/Hz, which is defined as different order depending on the processor/operating system, especially
4 0100 1011
with Windows vs. Linux! Hence, when porting a program in, e.g., C from
P SD[dBm/Hz] = 10 · log10 (P SD/1mW · 1Hz) = 10 · log10 (P SD/1mWs) . 5 0101 1010 Linux to Windows or vice versa, which deals with operations on individual
6 0110 1001 bytes of an integer, one might have to modify the code accordingly.
7 0111 1000
Advantage: symmetric ranges for positive and negative numbers
Disadvantage: double representation of zero, special carry and borrow
treatment,
see, e.g., https://en.wikipedia.org/wiki/Ones%27 complement
√ Reasoning: Adding a number and its One’s complement leads to all ones,
Using the unitary DFT with 1/ N in DFT and IDFT, from Parseval’s
adding another one for using Two’s complement leads to a carry, hence
theorem, we know that the average power is the same in time and DFT
10.00 · · · 0.
domain
1 1 X 2 1 1 X 2 1 1 X 2 1 1 X 2 Number range for 3-Bit numbers
· u = · U ⇐⇒ · u · ∆t = · U · ∆t .
R N i i R N i i R N i i R N i i
−4 100
Rewriting ∆t = 1/fs , using the sampling frequency fs , and the relation This means writing the One’s complement as
−3 101
1
∆t·N = 1/T = ∆f , T being the DFT duration, this leads to
B
X −2 110
1 · 20 + (1 − bi ) · 2−i = 2 − 2−B − |X| with X = 0.b1 b2 · · · bB
1 1 X 2 1 X 2 i=1 −1 101
P = · u = · U /fs · ∆f .
R N i i R i i 0 000
We observe that R1 · Ui2 /fs has the meaning of a PSD and a power per +1 001
frequency slot is then given by +2 010
1 +3 011
· Ui2 /fs · ∆f .
R
Properties: Single zero, but non-equal pos/neg ranges
17 The periodogram for PSD evaluation Signed integers: Two’s component representation
18 Number representations 1.b̄1 b̄2 · · · b̄B + 00 · · · 01
The periodogram is explained in Proakis/Manolakis without showing the
Unsigned integers Those are trivial. Adding two numbers in binary
link to correct units. This is added in the following: Since the difference to One’s complement is only the addition of 00 · · · 01,
representation might lead to a carry, which might have to be treated by
For discrete frequencies spaced by ∆f and indexed by i, the power P from i.e., 2−B , this yields 2 − |X|.
representing the result as the maximum possible integer. A multiplication PB −i
a power spectral density (PSD) S(i) is determined by would lead to twice the word length, since We use 1 = i=1 bi · 2 + 2−B to obtain
X B B B
S(i) · ∆f .
P = (2^N − 1) · (2^N − 1) = 2^{2N} − 2 · 2^N + 1
X X X
bi · 2−i + 1 − 1 = −1 + (1 − bi ) · 2−i + 2−B = −1 + b̄i · 2−i + 2−B
i
i=1 i=1 i=1
when assuming an all-ones word to be multiplied.
Starting from time-domain samples ui , the average power would be A negative number is hence represented by a Two’s complement number.
1 1 X 2 Signed integers: Sign-magnitude format (usually not used):
P = · ui One may just subtract a number from 10.00 · · · 02 , e.g., 10.0002 − 0.1012
R N i 1.b1 b2 · · · bB for negative values with sign bit set to one meaning 2-0.625 in decimal numbers.
and the energy in an interval ∆t is given by 10.000
Signed integers: One’s complement (old-fashioned):
1 1 X 2 −0.101
E∆t =
· u · ∆t . 1.b̄1 b̄2 · · · b̄B for negative values with sign bit set to one
R N i i
1.011
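A tiny Python sketch (my addition) of the fractional two's-complement format: quantizing x to B fractional bits and taking the result modulo 2 reproduces the '2 − |X|' interpretation and the subtraction example 10.000 − 0.101 = 1.011 above.

B = 3                                    # number of fractional bits

def twos_complement(x):
    # bit pattern s.b1...bB of a fraction x in [-1, 1) in two's complement
    q = round(x * 2 ** B)                # quantize to B fractional bits
    pattern = q % 2 ** (B + 1)           # a negative x becomes 2 - |x| (modulo 2)
    bits = format(pattern, f'0{B + 1}b')
    return bits[0] + '.' + bits[1:]

for x in (0.625, -0.625):
    print(x, twos_complement(x))
# 0.625  -> 0.101
# -0.625 -> 1.011   (= 2 - 0.625 = 1.375)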
Communications ... Sampled signal at the receiver
P∞ P∞
xr (t) = ν=−∞ xν h(t − νT ) · T · µ=−∞ δ(t − µT )
19 Nyquist impulse Sampling in time domain: P∞ P∞
−jωνT
P∞ ◦––• ν=−∞ xν H(jω)e ∗ 2π · µ=−∞ δ(ω − µω0 )
xs (t) = T· k=−∞ δ(t − kT ) R ∞ P∞ P∞
−jλνT
Baseband signal P∞ R T /2 = −∞ ν=−∞ xν H(jλ)e · 2π · µ=−∞ δ(ω − λ − µω0 )dλ
jkω0 t 1
= T· k=−∞ ck e mit ck = T −T /2
δ(t)e−jkω0 t dt = 1/T P∞ P∞ −j(ω−µ2π/T )νT
∞
= 2π · ν=−∞ µ=−∞ xν H(j(ω − µω0 ))e
X (Fourier series)
s(t) = Am g(t + Θ − mT ) P∞ = 2π ·
P∞ P∞
xν µ=−∞ H(j(ω − µω0 ))e−jνωT
jkω0 t ν=−∞
m=−∞ = k=−∞ e
P∞
Power density spectrum No inter-symbol interference, when µ=−∞ H(j(ω − µω0 )) = constant
∞
X ∞
X
ejkω0 t ◦––• 2π · δ(ω − kω0 )
S(jω) = T SA (ejωT )|G(jω)|2 = T σA
2
|G(jω)|2 k=−∞ k=−∞
SA and |G(jω)|2 are the contributions form the data sequence (line
coding!) and the transmit filter, respectively.
Arithmetic issues
Complex addition: two real additions The Nyquist criterion
Complex multiplication: four real multiplications We use the correspondence Transmit sequence
∞
X
Signal processors typically have an ALU (Arithmetic Logic Unit), a
X ∞
X xs (t) = xν δ(t − νT )
multiplier, a barrel shifter. xs (t) = T · δ(t − kT ) ◦––• Xs (jω) = 2π · δ(ω − kω0 ) ν=−∞
k=−∞ k=−∞
Alternative: CORDIC algorithm or implementation instead of multipliers. Received signal after overall impulse response h(t)
Those are designed for rotations and essentially, acting on the phase only, a Proof: ∞ Z ∞ ∞
X X
multiplication is one CORDIC operation. ejω0 t ◦––• 2πδ(ω − ω0 ) xt (t) = xν h(τ )δ(t − τ − νT )dτ = xν h(t − νT )
X X ν=−∞ −∞ ν=−∞
Gustafson: UNUM interval arithmetic,
http://www.johngustafson.net/pubs/RadicalApproach.pdf k k
Computing with randomness, Alaghi and Hayes, IEEE spectrum 03/18
Example: ideal bandlimited channel
Floating-point
Single−precision floating−point format
31 30 22 16
1; |f | < B
Sign Exponent Fraction H(jf ) =
0; |f | ≥ B
Fraction (continued) The steps of time-discrete transmission:
15 0
sin(2πBt) P∞
1. Convolution of a Dirac sequence ν=−∞ xν δ(t − νT ) with the impulse
Double−precision floating−point format h(t) = 2B ·
63 62 51 48
2πBt response of the overall channel h(t) (including pulse shaping filter,
Sign Exponent
channel, receive filter)
Fraction 2. At the receiver: multiplication with a dirac sequence
P∞
T · µ=−∞ δ(t − µT ) for sampling
P∞
15 0
◦––• Convolution with the dirac sequence 2π · µ=−∞ δ(ω − µω0 )
Single precision: (−1)sign · 2exponent−127 · 1.fraction2
Double precision: (−1)sign · 2exponent−1023 · 1.fraction2
-5 -4 -3 -2 -1 0 1 2 3 4 5
Properties: high dynamic range, coarse resolution for large numbers 1
(Time in units of T = 2B
)
20 Matched Filter Let S(f ) be the Fourier transform of the baseband pulse s(t). We compute
We now split the overall transfer function — and with it the noise power
density in a minimum and a maximum phase component
the signal-to-noise ratio in the sampling event at t = 0. In the numerator of
Let us consider a single impulse As(t) ◦––• AS(f ) (describing the overall 2
Sh,n = Ah,n ∗
· Wh,n (z) · Wh,n (1/z ∗ ) .
the following first equation we thus see the inverse Fourier transform
response of transmit filter plus channel) in noise with a power-density | {z } | {z }
determining the time-domain function after the filter response at t = 0. min. ph. max. ph.
spectrum N (f )
Squaring it thus leads to the signal power at the sampling instant.
In principle, both components could be inverted to function as a whitening
The maximum of the signal-to-noise ratio (SNR) is
2 filter, since |Wh,n (z)|2 | = |Wh,n
∗
(1/z ∗ )|2 for z = ejωT .
[N (f )]1/2 M (f ) S(f 1/2 )
R ∞
Z ∞ R∞
M (f )S(f )ej2πf 0 df
2 |A|2 −∞ df
|S(f )|2 | A −∞ | [N (f )]
S/N = |A|2 df S/N = = ∗
(1/z ∗ ) exactly mean? Let W (z) be minimum phase
R∞ R∞
N (f ) −∞
N (f )|M (f )|2 df −∞
N (f )|M (f )|2 df What does now Wh,n
−∞
and stable, i.e., have zeros and poles inside the unit circle. The argument
R∞ R∞ |S(f )|2
at it’s obtained when |A|2 N (f )|M (f )|2 df −∞ df
≤ −∞ N (f ) (1/z ∗ ) “mirrors” the zeros and poles to outside of the unit circle, i.e.,
S ∗ (f )
R∞
−∞
N (f )|M (f )|2 df
M (f ) = making it maximum phase and unstable(?). Inverting this part as a
N (f )
R∞ |S(f )|2 whitening filter would interchange zeros and poles, but they will still be
if N (f ) = N0 /2 (constant) = A2 −∞ N (f )
df
outside the unit circle. However, an alternative way of describing this
m(t) = s∗ (−t) , since property is also to see it as non-causal and stable, instead of causal and
Z ∞
S ∗ (f ) unstable. A sufficient delay will then make it causal again.
s(−t) = S(f )e−j2πf t df s∗ (−t) ◦––• S ∗ (f ) When M (f ) = N (f ) , equality is obtained =⇒ optimum!
−∞ The outer conjugation is just as in |W (jω)|2 = W (jω) · W ∗ (jω).
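A discrete-time sketch of the matched-filter statement (my addition; white noise, real pulse): among different receive templates, correlating with the pulse itself attains the Schwarz bound for the sampled SNR. The pulse, noise variance, and competing templates are arbitrary choices.

import numpy as np

rng = np.random.default_rng(5)
s = rng.standard_normal(32)        # known pulse shape
sigma2 = 0.5                       # white-noise variance per sample

def sampled_snr(w):
    # correlator view of the sampled filter output: SNR = (w.s)^2 / (sigma^2 ||w||^2)
    return np.dot(w, s) ** 2 / (sigma2 * np.dot(w, w))

candidates = {
    'matched (w = s)': s,
    'rectangular':     np.ones_like(s),
    'random':          rng.standard_normal(32),
}
for name, w in candidates.items():
    print(f'{name:18s} SNR = {sampled_snr(w):.2f}')

print('Schwarz bound:', np.dot(s, s) / sigma2)   # attained only by the matched template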
Since N (j(ω + ωc + m · 2π/T ) is real, the phase is thus determined by
S(j(ω + m · 2π/T )) · S ∗ (j(ω + m · 2π/T )), which is also real or, when
adding some delay, the phase will be linear. Thus, the matched filter leads
Eye pattern: 25 % and 100 % excess bandwidth to overall linear phase. The noise power density N (z = ejωT ) will be
For the proof, we need the so-called Schwarz inequality.
Z ∞ Z ∞ Z ∞ multiplied by |S(z)|2 /N 2 (z)|z=ejωT , yielding a total noise power density
|r(t)|2 dt · |s(t)|2 dt ≥ | r ∗ (t) · s(t)dt|2 after the matched filter and sampling
−∞ −∞ −∞
|S(z)|2
which is analogous to N (z) z=ejωT
| < X, Y > | ≤ ||X|| · ||Y ||
This is the same function as the overall transfer function for the signal.
Thus, if we decide to whiten the noise, e.g., in order to use a Viterbi
algorithm with Euclidean metric for equalizing, we should require the
overall impulse response to be as short as possible and lead to a small
overall delay.
Raised cosine p(t) is a possible solution for f (t) :
Eliminating the (undesired) spectral shaping of the noise after sampling
sin(πt/T ) cos(απt/T )
p(t) = Signal spectrum after sampling
πt/T 1 − (2αt/T )2
The convolution with the impulse response of the ‘matched’ filter is 1
∞
X |S(j(ω + m · 2π/T ))|2
T ;
0 ≤ |ω| ≤ (1 − α)π/T actually a correlation and leads to the correlation receiver. Sh,m (ejωT ) =
T T m=−∞
N (j(ω + ωc + m · 2π/T ))
P (jω) = T π
2 1 − sin 2α |ω| − T ; (1 − α)π/T ≤ |ω| ≤ (1 + α)π/T Z ∞
s(t) ∗ m(t) =
s(τ ) · m(t − τ )dτ Effect of the matched filter (and sampling) on the noise power spectral
0; |ω| > (1 + α)π/T −∞
Z density described by
∞
t = 0, s(t) = A · s′ (t), m(t) = s′ (−t) =⇒ [A · s′ (τ )] · s′ (τ )dτ Sh,n = A2h,n · Wh,n (z) · Wh,n
∗
(1/z ∗ )
−∞
e−jωc t
(A is a factor that represents the data on the transmitted impulse.)
t = kT
S ∗ (jω) 1
N [j(ω+ωc )] 2
Ah,n ∗ (1/z ∗ )
·Wh,n
Sampler
Line coding Minimization of the mean squared error (MSE)
Generation of spectral nulls. The term is also partly used for normal E{|ǫ|2 } = E{|Ak − Âk |2 }
−7 −5 −3 −1 +1 +3 +5 +7
M -PAM. Then, one denotes it as, e.g., 2B1Q instead of 4-PAM.
Channel description:
A null at DC is obtained if the absolute value of the Running Digital Sum Average power of an M -PAM alphabet
L
(RDS) is limited. X
i
Point distance 2a vk = hn Ak−n + ηk hn : channel coeff., ηk : noise
X n=0
|RDS(i)| = Aj < RDS0
1
P a2
PM/2−1
|Ai |2 = + 1)2 =
j=−∞
P = M · i M/2 · j=0 (2j Behind the equalizer:
∞
X
One distinguishes block-oriented (mBnN : B=binary, N = N ary) and a2 M/2·(M 2 −1) (M 2 −1)
= M/2 · 3 = a2 · 3
Âk = cj vk−j
so-called partial-response methods. j=−∞
21 Baseband transmission Spectrum of an MMS43 code (ISDN BA, Germany) with a brickwall
impulse shape; in practice, a different shaping filter is applied for a better
X This, however amplifies the noise where the channel has low transfer
s(t) = Ai · g(t − iT ) limitation of the spectrum.
function, especially zeros.
i
3e-07
Noise power density
g(t): Pulse shaping N0
Ai : Data
2.5e-07
Snn (ω) =
|H(ejωT )|2
2e-07
Power spectral density Z π/T Z π/T
dω
PDS(f) / Ws
1.5e-07 σn2 = Snn (ω)dω = N0
S(jω) = T SA (ejωT )|G(jω)|2 = 2
T σA |G(jω)|2 −π/T −π/T |H(ejωT )|2
1e-07
1
=⇒ SNR = 1/σn2 = R π/T
Example: 8-PAM 5e-08
N0 dω
−π/T |H(ejωT )|2
0
0 1/T=120 kHz 2/T=240 kHz
f
−7 −5 −3 −1 +1 +3 +5 +7
Linear equalization
Examples:
AMI: Alternate Mark Inversion,
1 used for T1-lines in the US
instable max phase HDB3: High Density Bipolar 3 Code,
if non−causal instable if causal
used for primary rate ISDN (German: Primärmultiplex Pmxa)
min ph. 1
stable if causal ISDN 2,048 Mbit/s fixed lines
stable if non−causal MMS43: Modified Monitored Sum 4B3T code,
used for the German ISDN BA (Basic Access)
Peak-Distortion Criterion and Zero-Forcing Solution
with 160 kbit/s
Zero-Forcing means the complete elimination of inter-symbol interference
in contrast to 2B1Q in the rest of the world
by inverting the channel transfer function H(z), i.e., the equalizer transfer
“Carter” Code: block-inversion code with detection bit function is
C(z) = 1/H(z)
Power spectral density of the errors:
MSE = E{e2i } = E{(yi − Ai )2 } = E (rTi c − Ai )2
SE = SA · |HC − 1|2 + N0 · |C|2
L x
∗
X l−j + N0 δlj |l − j| ≤ L
E vk−j vk−l = hn∗ hn+l−j + N0 δlj = with SY = SA · |H|2 + N0 ∂MSE ∂ ∂ 2 ∂ei
0 otherwise = E{e2i } = E e = 2 · E ei = 2 · E {ei ri }
n=0 ∂c ∂c ∂c i ∂c
SE = SY · |C − SA SY−1 H ∗ |2 + SA N0 SY−1
We replace the expectation by the current value
h−l∗
∗ −L ≤ l ≤ 0 Minimization of SE by eliminating the term with the squared absolute value
E Ak vk−l = LMS algorithm:
0 otherwise SA (z) · H ∗ (1/z ∗ ) c(i+1) = c(i) − β · ei · ri
=⇒ C(z) =
SA (z)H(z)H ∗ (1/z ∗ ) + N0
β: step size
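A minimal LMS sketch in Python (not from the slides): a linear equalizer adapted with c(i+1) = c(i) − β e_i r_i against known training symbols. The channel h, step size β, equalizer length, and decision delay are assumptions made only for this illustration.

import numpy as np

rng = np.random.default_rng(6)
h = np.array([0.1, 1.0, 0.3])                 # assumed channel impulse response
A = rng.choice([-1.0, 1.0], size=5000)        # known +/-1 training symbols
r = np.convolve(A, h)[:len(A)] + 0.05 * rng.standard_normal(len(A))

M = 5                                          # equalizer with 2M+1 taps
c = np.zeros(2 * M + 1)
beta = 0.01
delay = M + 1                                  # decision delay (channel main tap at index 1)

for i in range(2 * M, len(A)):
    ri = r[i - 2 * M:i + 1][::-1]              # regressor [r_i, r_{i-1}, ..., r_{i-2M}]
    yi = np.dot(c, ri)
    ei = yi - A[i - delay]                     # error against the delayed training symbol
    c -= beta * ei * ri                        # LMS update c <- c - beta * e_i * r_i

# after convergence the combined response h*c is close to a delayed unit pulse
print(np.round(np.convolve(h, c), 2))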
∞
2 The LMS-(Least Mean Square) algorithm (gradient method)
X
E{ǫk2 } = E Ak −
cj vk−j −→ min
j=−∞ ri+M ri+M −1 ri ri−M
Alternative MSE derivation
T T T T
∞ 2 Nk
LEQ
∂ X
=⇒ E Ak − cj vk−j =0 Ak c−M c−M +1 c0 cM
∂cl
j=−∞ H(z) C(z) Slicer
Âk
This means that orthogonality between the error ǫk and the signal realization − Σ
vk−l ensures minimality: Ak ISI
∗ H(z)C(z) − 1 yi
E{ǫk vk−l }=0 Ek
∞
X Ek
∗ M
E Ak − cj vk−j · vk−l =0 Nk
X
j=−∞
C(z) yi = cm ri−m = riT c (column vectors)
Noise
m=−M
∞
X ∗
∗
cj · E vk−j vk−l = E Ak vk−l
j=−∞ MSE = E{ei2 } = E{(yi − Ai )2 } = E (riT c − Ai )2
A more detailed look into the equivalence of both representations of SE :
An illustration of the minimization of a mean squared error by
orthogonality between error and signal
SE = SA · |HC − 1|2 + N0 · |C|2
C(z)[H(z)H ∗ (1/z ∗ ) + N0 ] = H ∗ (1/z ∗ ) = SA |H|2 |C|2 − SA HC − SA H ∗ C ∗ + SA + N0 |C|2
✣P H ∗ (1/z ∗ )
f~ aj φ~j − f~ C(z) =
H(z)H ∗ (1/z ∗ ) + N0 SE = SY · |C − SA SY−1 H ∗ |2 + SA N0 SY−1
|C|2 SY − CSA SY HSY − C ∗ SA SY−1 SY H ∗ + |SA |2 SY−1 |H|2 + SA N0 SY−1
∗ ∗
For N0 → 0 =⇒ C(z) = 1/H(z) (ZF) = ∗ −1
qφ~2 =
|C|2 SA |H|2 + N0 − CSA H − C ∗ SA H ∗ + |SA |2 SY−1 |H|2 + SA N0 SY−1
∗
φ~1✌ ⑦ | {z }
P ~
aj φ j SA
We used that SA ∈ ℜ and thus also SY ∈ ℜ
Tomlinson-Harashima Precoding
22 Decision-Feedback Equalization and alike
C(z) · 2M
A(z) T (z) Channel R(z) Detector Â(z)
r Linear r̂ F (z) mod 2M
Feed−Forward Tomlinson-Harashima Precoding
Filter
a Slicer The problem with DFE is a possible error propagation. At high SNR this AWGN
F (z) − 1 N (z)
Feedback can be neglected, not at low SNR. When channel coding is used, the
Filter
operating range is moved towards lower SNRs. Results after decoding Choose C(z) such that T (z) ∈ (−M, M ] (approximately equally
b
cannot be used for the feedback path due to the large decoding delay. A distributed)
A DFE consists of a linear feed-forward and a feedback filter. The feedback component is special interleaving may partly be a solution or computing the equalizer on
=⇒ P = M 2 /3 instead of P = (M 2 − 1)/3
there to eliminate (reduce) the remaining ISI. Thus, every path of a Viterbi decoder of a convolutional code.
bn = hntot , When the channel does not vary too much and a reverse channel is Drawback: High dynamics at the receiver
One may select C(z) according to other criteria. C(z) may, e.g., be chosen such that the
where htot denotes the total impulse response of transmit filter, channel, receive filter
available, a better alternative is Precoding = Relocating the feedback (DF) probability distribution will approximately be Gaussian (Shaping, Trellis-Precoding).
(matched filter). With the minus sign in the block diagram all postcursors will be eliminated. to the transmitter. Additionally, one can try to incorporate the receiver dynamics. Shaping allows the
The linear feed-forward filter equalizes the precursors. optimization according to different criteria.
Advantage: No noise in the feed-back path In the decoding of an error-correcting code at the receiver all points mod 2M are considered
Problem: Error propagation
to be equivalent.
In contrast to the LMS algorithm, on average, uncorrelated noise does not Noise-predictive DFE Tomlinson-Harashima Precoding
have an influence on the Zero-Forcing adaptation. Noise is not taken into Linear Assumption: Whitened Matched Filter is realized by the linear equalizer,
Feed−Forward
account which leads to noise enhancement at zeros of the channel transfer Filter
i.e., the remaining overall impulse response is “minimum phase”.
A1 (z) Slicer DFE
function. Zero forcing does only converge when the “eye” is already open. Feedback
C(z) · 2M
In this case, we have r ≈ A. Intuitively, it should thus converge. Filter
B1 (z) A(z) T (z) Channel R(z) Detector Â(z)
Let the estimated input be F (z) mod 2M
Feedback
X X Filter
Noise-Predictive
Âi = An qi−n + cn ni−n . B2 (z) DFE F (z) − 1 AWGN
N (z)
Linear
Feed−Forward
What does E {ei Ai } = 0 really mean for the overall impulse response? Filter
A2 (z) Noise Slicer
Does it actually lead to q0 = 1 and qi = 0 ∀ i ≠ 0, the ZF criterion? Prediction
Filter
D(z) T (z) = A(z) − [F (z) − 1] · T (z) + 2M · C(z)
E{ei Ai−j } = E{(Ai − Âi )Ai−j } =
= E{Ai Ai−j } − E{Âi Ai−j } , j = −M, . . . , M The equivalence of both DFE descriptions:
√ A(z) + 2M · C(z)
! T (z) =
= δj0 − qj = 0 =⇒ qj = δj0 F (z)
A1 (z) = A2 (z) (1 + D(z))
We assumed Ai and ni to be uncorrelated i.i.d. ±1-information and noise B1 (z) = B2 (z) (1 + D(z)) − D(z)
R(z) = A(z) + 2M · C(z) + N (z)
sequences, respectively.
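A small end-to-end sketch of Tomlinson-Harashima precoding (my addition), assuming a monic overall response F(z), an M-PAM alphabet {±1, …, ±(M−1)}, and a noiseless channel for clarity: the transmitter applies the modulo-2M feedback, the receiver only takes mod 2M and compares. It also shows the transmit power approaching M²/3 instead of (M²−1)/3.

import numpy as np

M = 4                                           # 4-PAM: symbols in {-3,-1,1,3}
F = np.array([1.0, 0.6, -0.2])                  # assumed monic overall response after the WMF

def mod2M(x):
    return (x + M) % (2 * M) - M                # fold into [-M, M); the boundary side is immaterial here

rng = np.random.default_rng(7)
A = rng.choice([-3.0, -1.0, 1.0, 3.0], size=2000)

# transmitter: t(n) = mod_2M( a(n) - sum_{k>=1} F_k t(n-k) )
t = np.zeros(len(A))
for n in range(len(A)):
    fb = sum(F[k] * t[n - k] for k in range(1, len(F)) if n - k >= 0)
    t[n] = mod2M(A[n] - fb)

# channel applies F(z) (noiseless here); receiver: modulo, then compare/slice
r = np.convolve(t, F)[:len(A)]
print(np.allclose(mod2M(r), A))                 # True: ISI removed, data recovered
print(t.var(), (M**2 - 1) / 3, M**2 / 3)        # precoded power ~ M^2/3 vs (M^2-1)/3 without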
The Least-Mean Square (LMS) alg. for the DFE
Adaptation in the case of Zero Forcing Linear Feed-Forward Filter Feedback Filter
ri+M r
1 i+M1 −1
ri ri−M
2 r̂i−1 r̂i−N
Tomlinson-Harashima Precoding
Let hi be the samples of the channel impulse response, ci the coefficients of T T T T T T T A(z) T (z) Channel R(z) Detector Â(z)
mod 2M
the linear equalizer, and qi the overall impulse response. a−M a0 aM
F (z) mod 2M
1 2 b1 b2 bN
Zero Forcing: XM 1 i=0 AWGN
F (z) − 1 N (z)
qi = cm hi−m = Σ Σ
0 i 6= 0
m=−M
If there is no noise, Least Squares is equivalent to Zero Forcing. C(z) · 2M
ei r̂i
A(z) T (z) Channel R(z) Detector Â(z)
The Zero-Forcing adaptation rule (a) F (z) mod 2M
an+1 = an − αen rn
c(i+1) = c(i) − β · ei · Ai , i.e., bn+1 = bn
(b)
+ βen r̂n F (z) − 1 AWGN
N (z)
(i+1)
cj
(i)
= cj − β · ei · Ai−j If we combine both letting α = β:
(Assumption: monic F (z), i.e., F0 = 1)
(a)
is approaching the solution of E {ei Ai } = 0. It eliminates the correlation an+1 an rn
= − α · en · Transmit alphabet {±1, ±3, . . . , ±(M − 1)}
between error and information sequence. bn+1 bn −r̂n
(b)
Average power of a quadratic M -QAM constellation
23 Computation of error probabilities of 8-PSK and 16-QAM
[Figure: 16-QAM constellation with levels ±1, ±3 in each dimension]
M -PAM over an AWGN channel
−7 −5 −3 −1 +1 +3 +5 +7      (point distances 2a)
000 001 011 010 110 111 101 100 Examples for using M -PSK and M -QAM √ √
M /2−1 M /2−1
1 X a2 X X
• PSK: Satellite communication with 4-PSK and coded 8-PSK, GSM P = · |Ai |2 = · (2i + 1)2 + (2j + 1)2 =
M i
M/4 i=0 j=0
Probability of exceeding the middle threshold for 2-PAM: mobiles with GMSK as a special case of offset-4-PSK, mobile phones in √ √
Japan with π/4-DQPSK a2 M
M /2 · (M − 1) 2 · (M − 1)
Z ∞ Z ∞
1 − (y+1)
2 = ·2· · = a2 ·
p = p(y| − 1)dy = p e 2σn2 dy M/4 2 3 3
0 0 2πσ 2 • QAM: Fixed wireless (16- up to 256-QAM), cable modems (16- and
Z ∞
1 q n q QAM: A doubling of the constellation means a power increase by 3 dB
2 256-QAM), 51.84-Mb LAN (CAP, Carrierless AM/PM), single-carrier
= √ e−t dt = 21 erfc Es
N0 = Q
2Es
N0 when preserving the minimum Euclidean distance.
π √1 variant of VDSL (Very high bit rate Digital Subscriber Line), V.34
2σ 2
PAM: A doubling of the constellation means a power increase by 6 dB
modem
when preserving the minimum Euclidean distance
Maximum-Likelihood Sequence Estimation
Ai Ai−1 Ai−2 30 Bandpass transmission
T T
√
0.5 1.0 0.5 M -PSK (Phase-Shift Keying) Spectral efficiency of M -QAM, M -PSK (and also M -PAM)
( )
Σ
√ X 2π
M 2 4 8 16 32 64
ri f (t) = 2·ℜ ej M ai · g(t − iT )ejω0 t
i
ρ / bit/s/Hz 0.5 1 1.5 2 2.5 3 B according to first spectral Null
ni
AWGN
M -QAM (Quadrature-Amplitude Modulation): ρ / bit/s/Hz 1 2 3 4 5 6 B = Nyquist bandwidth
+1/2.0 +1/2.0 +1/2.0
+1, +1 √ X
−1/1.0
+1/1.0
−1/1.0
+1/1.0
−1/1.0
+1/1.0 2·
f (t) = ai · g(t − iT ) · cos(ω0 t) − bi · g(t − iT ) · sin(ω0 t) Why are the spectral efficiencies of PAM and QAM the same?
+1, −1 i √
−1/0.0 −1/0.0 −1/0.0
The bandwidth for baseband transmission ( M -PAM) is only half of that of
+1/0.0 +1/0.0 +1/0.0
−1, +1 +1/ − 1.0 +1/ − 1.0 +1/ − 1.0 g(t): impulse shaping; ai , b i : data bandpass transmission (M -QAM). Only one dimension instead of two can be
√
−1/ − 1.0 −1/ − 1.0 −1/ − 1.0 utilized. We have taken this into account by writing
M . The number of bits is
−1, −1 Power spectral density
−1/ − 2.0 −1/ − 2.0 −1/ − 2.0 thus also halved for PAM.
Realization, e.g., with the M -Algorithm, a modification of the Viterbi algorithm that proceeds S(jω) = 2T SA (ejωT )|G(jω)|2 = 2 · σA
2
T |G(jω)|2
only with M paths.
Computation of error probabilities of M -PAM over an AWGN Bandwidth efficiency
channel
ρ = Rb /B = log2 (M )/B bit/s/Hz
Remarks
−7 −5 −3 −1 +1 +3 +5 +7 Partly, the bandwidth is used that corresponds to the first spectral Null,
• Realization of the linear equalizer as Fractionally-Spaced Equalizer 000 001 011 010 110 111 101 100 partly also to the so-called Nyquist bandwidth, which is only half as wide.
(usually T /2). Advantages: Realization of the matched filter, uncritical
Symbol-error probability for M -PAM: The real bandwidth will be in between.
symbol synchronization
r ! Spectrum shape with rectangular time-domain transmit pulse
• The equalizer can be implemented on all paths of a Viterbi algorithm. 1 Es 3
Ps = 2(M − 1) · erfc · 1
The contents of the equalizer shift registers are determined by the 2 N0 M 2 − 1
0.8
normalized PSD(f)
paths. 0.6
Bit-error probability when using a Gray code (obtained by iterative 0.4
mirroring, see figure) 0.2
0
Pb = Ps / log2 (M ) 2/T -1/T 0 1/T 2/T
f-f_c
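The symbol-error expression above can be evaluated with math.erfc and checked by simulation; a sketch (my addition) using the standard exact form with an additional factor 1/M that averages over inner and edge points — this factor is hard to read in the slide above, so take it as an assumption:

import math
import numpy as np

def ps_mpam(M, EsN0):
    # Ps = (M-1)/M * erfc( sqrt( 3/(M^2-1) * Es/N0 ) )
    return (M - 1) / M * math.erfc(math.sqrt(3.0 / (M**2 - 1) * EsN0))

# quick Monte-Carlo cross-check for 8-PAM
M, EsN0_dB = 8, 22.0
EsN0 = 10 ** (EsN0_dB / 10)
rng = np.random.default_rng(8)

levels = np.arange(-(M - 1), M, 2).astype(float)       # -7, -5, ..., +7
Es = np.mean(levels**2)                                 # = (M^2 - 1)/3
sigma = math.sqrt(Es / EsN0 / 2)                        # noise variance N0/2 per real dimension
a = rng.choice(levels, size=2_000_000)
r = a + sigma * rng.standard_normal(a.size)
a_hat = np.clip(2 * np.round((r + (M - 1)) / 2) - (M - 1), -(M - 1), M - 1)
print(np.mean(a_hat != a), ps_mpam(M, EsN0))            # both ~ 1e-4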
Differential detection versus differential decoding
differential encoder differential decoder
yi−1 yi−1
T T
xi yi yi zi
Iterative coefficient adaptation
Least Mean Squares (LMS): Differential decoding may double the number of errors. This only means a
minor loss in SNR.
ci+1 = ci − βei ri∗ differential phase difference
computation
detection of
M -PSK
T
Zero Forcing (ZF): In the case of GMSK, additionally a Gaussian filtering is applied:
−
I 2 !
ci+1 = ci − βei a∗i
arctan mod 2π
Q (Q/I)
+
corr.
f log 2
H(f ) = exp −
In the case of differential incoherent detection, neighboring signal values are B3dB 2
compared, i.e., subtracted. This in turn means an addition of noise samples and r
thus a doubling of the noise power, if we assume white noise. Nevertheless, it 2π 2π 2 2 2
may be chosen if the channel does not allow for coherent reception due to fast h(t) = B3dB exp − B t
fluctuations. log 2 log 2 3dB
Differential detection leads to a loss of 3 dB.
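The Gaussian filter pair for GMSK can be checked numerically. This is my own sketch; the 3-dB bandwidth and sampling rate are arbitrary illustrative choices.

```python
# Sketch: sample h(t) from the closed form above; its Fourier transform should
# reproduce H(f) = exp(-(f/B3dB)^2 * ln2 / 2), with |H(B3dB)|^2 = 1/2 (the 3-dB point).
import numpy as np

B3dB = 300.0                              # assumed 3-dB bandwidth in Hz (illustrative)
fs = 100e3                                # sampling rate, chosen >> B3dB
t = np.arange(-2**14, 2**14) / fs
h = np.sqrt(2*np.pi/np.log(2)) * B3dB * np.exp(-(2*np.pi**2/np.log(2)) * (B3dB * t)**2)

H = np.fft.rfft(np.fft.ifftshift(h)) / fs            # approximate continuous-time FT
f = np.fft.rfftfreq(len(t), 1/fs)
H_closed = np.exp(-(f/B3dB)**2 * np.log(2)/2)

print(np.max(np.abs(np.abs(H) - H_closed)))          # ~0: the pair is consistent
print(np.interp(B3dB, f, np.abs(H))**2)              # ~0.5, i.e., -3 dB at f = B3dB
```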
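Returning to the coefficient-adaptation rules above, here is a minimal LMS sketch for a complex T-spaced linear equalizer. Channel, equalizer length, step size, and training length are assumptions of mine, not values from the lecture; the error is defined as output minus training symbol to match the minus sign in the update.

```python
# Sketch: LMS adaptation c_{i+1} = c_i - beta * e_i * conj(r_i) for a complex linear equalizer.
import numpy as np

rng = np.random.default_rng(1)
h = np.array([0.9 + 0.2j, 0.3 - 0.1j, 0.1j])       # assumed channel impulse response
N, L, beta = 5000, 11, 5e-3                        # symbols, equalizer taps, step size

a = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)   # 4-QAM training data
noise = 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
r = np.convolve(a, h)[:N] + noise

c = np.zeros(L, dtype=complex)
c[L // 2] = 1.0                                    # center-spike initialization
delay = L // 2
errs = []
for i in range(L, N):
    r_vec = r[i - L + 1:i + 1][::-1]               # regressor (most recent sample first)
    y = r_vec @ c                                  # equalizer output y_i = r_i^T c
    e = y - a[i - delay]                           # error = output - training symbol
    errs.append(abs(e)**2)
    c = c - beta * e * np.conj(r_vec)              # LMS update from the slide

print("mean |e|^2 over the last 500 symbols:", np.mean(errs[-500:]))
```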
Linear equalization

Complex signal values and also complex coefficients:

y_i = r_i^T · c

ℜ{y_i} = ℜ{r_i^T}·ℜ{c} − ℑ{r_i^T}·ℑ{c}
ℑ{y_i} = ℜ{r_i^T}·ℑ{c} + ℑ{r_i^T}·ℜ{c}

[Figure: realization with four real-valued filters: ℜ{r_i} and ℑ{r_i} are filtered by ℜ{c} and ℑ{c} and combined with the signs above to form ℜ{y_i} and ℑ{y_i}.]

QAM (Quadrature-Amplitude Modulation) and CAP (Carrierless AM/PM)

QAM:

f(t) = √2 · Σ_i [ a_i · g(t − iT) · cos(ω₀t) − b_i · g(t − iT) · sin(ω₀t) ]
     = √2 · ℜ{ Σ_i (a_i + j b_i) · g(t − iT) · e^{jω₀t} }
     = √2 · ℜ{ Σ_i (a_i + j b_i) · g(t − iT) · e^{jω₀(t−iT)} · e^{jω₀iT} }
     = √2 · Σ_i ℜ{ (a_i + j b_i) · [ g(t − iT) · cos(ω₀(t − iT)) + j g(t − iT) · sin(ω₀(t − iT)) ] · e^{jω₀iT} }

CAP:

f(t) = √2 · Σ_i [ a_i · g(t − iT) · cos(ω₀(t − iT)) − b_i · g(t − iT) · sin(ω₀(t − iT)) ]

Offset-QPSK and MSK

Offset-QPSK:

f(t) = Σ_i [ I_{2i} · g(t − 2iT) − j · Q_{2i+1} · g(t − (2i+1)T) ]

I and Q change time-interlaced by T, thereby reducing the possible phase changes to π/2 instead of π for usual QPSK. In return, only 1 bit/T (2 bit/2T) is transmitted.

MSK, for g(t) = sin(πt/(2T)) for 0 ≤ t ≤ 2T, and 0 otherwise.

The trajectories follow sinusoids; the amplitude (envelope) is constant (CPM/CPFSK).
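A small numerical sketch (my own) of the constant-envelope statement: build the offset-QPSK baseband signal with the half-sine pulse g(t) and check that |f(t)| stays at 1 away from the block edges. Data and oversampling factor are arbitrary.

```python
# Sketch: offset-QPSK baseband with the MSK half-sine pulse has a constant envelope.
import numpy as np

rng = np.random.default_rng(2)
T, ovs, n_pairs = 1.0, 64, 20                     # symbol interval, samples per T, data length
I = rng.choice([-1.0, 1.0], size=n_pairs)         # I_{2i}
Q = rng.choice([-1.0, 1.0], size=n_pairs)         # Q_{2i+1}

def g(t):                                         # MSK pulse: half sine over [0, 2T]
    return np.where((t >= 0) & (t <= 2 * T), np.sin(np.pi * t / (2 * T)), 0.0)

t = np.arange(0.0, (2 * n_pairs + 1) * T, T / ovs)
f = np.zeros_like(t, dtype=complex)
for i in range(n_pairs):
    f += I[i] * g(t - 2 * i * T) - 1j * Q[i] * g(t - (2 * i + 1) * T)

env = np.abs(f)
steady = (t >= T) & (t <= (2 * n_pairs - 1) * T)  # ignore ramp-up/-down at the block edges
print(env[steady].min(), env[steady].max())       # both ~1: constant envelope
```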
DFE

y_i = r_i^T · c − y_{i−1}^T · d

ℜ{y_i} = ℜ{r_i^T}·ℜ{c} − ℑ{r_i^T}·ℑ{c} − ℜ{y_{i−1}}·ℜ{d} + ℑ{y_{i−1}}·ℑ{d}
ℑ{y_i} = ℜ{r_i^T}·ℑ{c} + ℑ{r_i^T}·ℜ{c} − ℜ{y_{i−1}}·ℑ{d} − ℑ{y_{i−1}}·ℜ{d}

[Figure: DFE realization with real-valued feedforward filters ℜ{c}, ℑ{c} acting on ℜ{r_i}, ℑ{r_i}, real-valued feedback filters ℜ{d}, ℑ{d} acting on the decided ℜ{y_{i−1}}, ℑ{y_{i−1}}, and the sign combinations above.]

π/4-QPSK

A QPSK variant with phase rotations of π/4 at every clock cycle.

[Figure: spectral regrowth due to non-linearities (TWT, Traveling Wave Tube).]

QAM transmitter and receiver

[Figure: transmitter with NRZ data, demultiplexer, impulse shaping, modulation by √(2/T)·cos(ω₀t) and √(2/T)·sin(ω₀t), and summation; receiver with demodulation by √(2/T)·cos(ω₀t) and √(2/T)·sin(ω₀t), WMF, equalizer and slicer, and multiplexer.]
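A quick numpy check (my own) that the four real-valued combinations above reproduce the complex DFE output y_i = r_i^T·c − y_{i−1}^T·d; all vectors are random placeholders.

```python
# Sketch: verify the real/imaginary decomposition of the DFE output against the complex form.
import numpy as np

rng = np.random.default_rng(3)
Lff, Lfb = 7, 3                                   # feedforward / feedback lengths (assumed)
r = rng.standard_normal(Lff) + 1j * rng.standard_normal(Lff)      # received samples r_i
yprev = rng.standard_normal(Lfb) + 1j * rng.standard_normal(Lfb)  # past decisions y_{i-1}
c = rng.standard_normal(Lff) + 1j * rng.standard_normal(Lff)      # feedforward coefficients
d = rng.standard_normal(Lfb) + 1j * rng.standard_normal(Lfb)      # feedback coefficients

y = r @ c - yprev @ d                             # y_i = r_i^T c - y_{i-1}^T d

y_re = r.real @ c.real - r.imag @ c.imag - yprev.real @ d.real + yprev.imag @ d.imag
y_im = r.real @ c.imag + r.imag @ c.real - yprev.real @ d.imag - yprev.imag @ d.real

print(np.allclose(y, y_re + 1j * y_im))           # True
```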
Probability for exceeding the middle threshold of 2-PAM (see before):

p = ∫₀^∞ p(y|−1) dy = ∫₀^∞ 1/√(2πσ_n²) · e^{−(y+1)²/(2σ_n²)} dy
  = 1/√π · ∫_{1/(√2 σ_n)}^∞ e^{−t²} dt = 1/2 · erfc(√(SNR₀)) = Q(√(2·SNR₀))

SNR₀ has been written for E_s/N_0, which should outline that we refer to the distance of two neighboring points.

When referring to the actual average power of an M-QAM constellation:

p = 1/2 · erfc(√( E_s/N_0 · 3/(2(M−1)) ))
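A Monte Carlo sanity check (my own sketch) of the last expression for 16-QAM: with average symbol energy E_s, the half minimum distance is a = √(3E_s/(2(M−1))) and the per-dimension noise variance is N_0/2. The E_s/N_0 value and sample count are arbitrary.

```python
# Sketch: Monte Carlo check of the per-threshold probability p for M-QAM with average energy Es.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(4)
M, EsN0_dB, n = 16, 12.0, 2_000_000
EsN0 = 10 ** (EsN0_dB / 10)

Es, N0 = 1.0, 1.0 / EsN0                 # normalize Es = 1
a = np.sqrt(3 * Es / (2 * (M - 1)))      # half the minimum distance for this Es
sigma = np.sqrt(N0 / 2)                  # per-dimension noise standard deviation

p_formula = 0.5 * erfc(np.sqrt(Es / N0 * 3 / (2 * (M - 1))))
p_sim = np.mean(rng.standard_normal(n) * sigma > a)
print(p_formula, p_sim)                  # the two values should be close
```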
31 Computation of error probabilities for M-QAM under AWGN

Quadratic M-QAM constellations:

P_s = 1/M · [ (M − 4√M + 4)·P_s^(i) + 4(√M − 2)·P_s^(r) + 4·P_s^(e) ]

Bit-error probability when using a Gray code (obtained by iterative mirroring, see figure):

P_b = P_s / log₂(M)

[Figure: Gray-coded 16-QAM constellation with coordinates −3, −1, +1, +3 per dimension and labels 1000 1001 1101 1100 / 1010 1011 1111 1110 / 0010 0011 0111 0110 / 0000 0001 0101 0100.]
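The "iterative mirroring" construction of the Gray code referenced above can be written in a few lines (my own sketch). For three bits it reproduces exactly the 8-PAM labeling shown earlier.

```python
# Sketch: Gray code by iterative mirroring, so neighboring labels differ in exactly one bit.
def gray(bits):
    codes = [""]
    for _ in range(bits):
        codes = ["0" + c for c in codes] + ["1" + c for c in reversed(codes)]
    return codes

# Per-dimension labels for the four amplitudes of a 16-QAM axis (-3, -1, +1, +3):
print(gray(2))            # ['00', '01', '11', '10']
# Adjacent symbols differ in one bit, so a nearest-neighbor symbol error causes
# (mostly) a single bit error, giving Pb ~ Ps / log2(M).
print(gray(3))            # ['000', '001', '011', '010', '110', '111', '101', '100']
```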
Computation of error probabilities for M-QAM under AWGN

[Figure: square M-QAM constellation with the decision regions of inner points, border points, and edge (corner) points highlighted.]
Symbol-error probability for M-QAM:

Inner points:                    P_s^(i) = 4·p − 4·p²
Border points (without edges):   P_s^(r) = 3·p − 2·p²
Edges:                           P_s^(e) = 2·p − p²
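Putting the pieces together (my own sketch): the per-threshold probability p, the inner/border/edge expressions above, and the weighting from the previous slide give P_s and P_b for square 16-QAM; a short Monte Carlo run at an arbitrary E_s/N_0 serves as a cross-check.

```python
# Sketch: closed-form Ps/Pb for 16-QAM from p and the point classes, plus a Monte Carlo check.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(5)
M, EsN0_dB, n = 16, 14.0, 2_000_000
EsN0 = 10 ** (EsN0_dB / 10)
rM = int(np.sqrt(M))

p = 0.5 * erfc(np.sqrt(EsN0 * 3 / (2 * (M - 1))))            # per-threshold probability
Ps_i, Ps_r, Ps_e = 4*p - 4*p**2, 3*p - 2*p**2, 2*p - p**2    # inner / border / edge points
Ps = ((M - 4*rM + 4) * Ps_i + 4 * (rM - 2) * Ps_r + 4 * Ps_e) / M
Pb = Ps / np.log2(M)

# Monte Carlo with a unit-average-energy 16-QAM and per-dimension nearest-level decisions
levels = np.arange(-(rM - 1), rM, 2)                         # -3, -1, +1, +3
a = np.sqrt(3 / (2 * (M - 1)))                               # scaling so that Es = 1
sigma = np.sqrt(1 / EsN0 / 2)                                # per-dimension noise std
li, lq = rng.choice(levels, n), rng.choice(levels, n)
rx_i = a * li + sigma * rng.standard_normal(n)
rx_q = a * lq + sigma * rng.standard_normal(n)

def detect(x):                                               # nearest constellation level
    k = np.clip(np.round((x / a + (rM - 1)) / 2), 0, rM - 1)
    return 2 * k - (rM - 1)

sim = np.mean((detect(rx_i) != li) | (detect(rx_q) != lq))
print(f"Ps closed form: {Ps:.4e}, simulated: {sim:.4e}, Pb (Gray): {Pb:.4e}")
```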