21EC51 DC Module 2
Module-2
where $w(t)$ is the channel noise. The receiver has the task of observing the received signal $x(t)$ for a duration of $T$ seconds and then making a best estimate of the transmitted signal $s_i(t)$, or equivalently the symbol, $m_i$.

However, due to the presence of channel noise, the receiver will make occasional errors. The requirement, therefore, is to design the receiver so as to minimize the average probability of symbol error, defined as

$$P_e = p_0\,P(\hat{m} = 1 \mid \text{symbol } 0 \text{ sent}) + p_1\,P(\hat{m} = 0 \mid \text{symbol } 1 \text{ sent})$$

where $p_1$ and $p_0$ are the prior probabilities of transmitting symbols 1 and 0, respectively, and $\hat{m}$ is the estimate of the symbol (1 or 0) sent by the source, which is computed by the receiver. $P(\hat{m} = 1 \mid \text{symbol } 0 \text{ sent})$ and $P(\hat{m} = 0 \mid \text{symbol } 1 \text{ sent})$ are conditional probabilities of error.
To minimize the average probability of symbol error between the receiver output and the symbol emitted by the source, in a generic setting that involves an M-ary alphabet whose symbols are denoted by $m_i$, $i = 1, 2, \ldots, M$, we have:
1. To optimize the design of the receiver so as to minimize the average probability of symbol error.
2. To choose the set of signals $\{s_i(t)\}$ for representing the symbols $\{m_i\}$, respectively, since this choice affects the average probability of symbol error.
Geometric Representation of Signals
The essence of geometric representation of signals is to represent any set of $M$ energy signals $\{s_i(t)\}$ as linear combinations of $N$ orthonormal basis functions, where $N \le M$. Given a set of real-valued energy signals $s_1(t), s_2(t), \ldots, s_M(t)$, each of duration $T$ seconds, we write

$$s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t), \qquad 0 \le t \le T, \quad i = 1, 2, \ldots, M \qquad (4)$$

where the coefficients of the expansion are defined by

$$s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \qquad i = 1, \ldots, M, \quad j = 1, \ldots, N \qquad (5)$$

The real-valued basis functions $\phi_1(t), \phi_2(t), \ldots, \phi_N(t)$ form an orthonormal set,

$$\int_0^T \phi_i(t)\,\phi_j(t)\,dt = \delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \ne j \end{cases} \qquad (6)$$

where $\delta_{ij}$ is the Kronecker delta.
∙ The first condition of eq. (6) states that each basis function is normalized to have unit energy.
∙ The second condition states that the basis functions are orthogonal with respect to each other over the interval $0 \le t \le T$.
For prescribed $i$, the set of coefficients $\{s_{ij}\}_{j=1}^{N}$ may be viewed as an N-dimensional signal vector, denoted by $\mathbf{s}_i$. The important point to note here is that the vector $\mathbf{s}_i$ bears a one-to-one relationship with the transmitted signal $s_i(t)$:
a) Given the N elements of the vector $\mathbf{s}_i$ as input, we may use the scheme shown in Figure 2.2a to generate the signal $s_i(t)$, which follows directly from eq. (4). This scheme consists of a bank of N multipliers, with each multiplier supplied with its own basis function, followed by a summer. The scheme of Figure 2.2a may be viewed as a synthesizer.
b) Given the signal $s_i(t)$ operating as input, we may use the scheme shown in Figure 2.2b to calculate the coefficients $s_{i1}, s_{i2}, \ldots, s_{iN}$, which follows directly from eq. (5). This second scheme consists of a bank of N product-integrators or correlators with a common input, each supplied with its own basis function. The scheme of Figure 2.2b may be viewed as an analyzer.

Figure 2.2 (a) Synthesizer for generating the signal $s_i(t)$. (b) Analyzer for reconstructing the signal vector $\{s_{ij}\}$.
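The synthesizer and analyzer reduce to a matrix product in discrete time. Below is a minimal numerical sketch (not from the notes), assuming a sampled time axis and an illustrative two-function sinusoidal basis:

```python
import numpy as np

# A minimal numerical sketch of the synthesizer (eq. 4) and analyzer (eq. 5),
# assuming time is discretized with step dt so integrals become sums.
T, N_samp = 1.0, 1000
t = np.linspace(0, T, N_samp, endpoint=False)
dt = T / N_samp

# Example orthonormal basis (assumed here for illustration): the first two
# normalized harmonic sinusoids on [0, T].
phi = np.array([np.sqrt(2/T) * np.sin(2*np.pi*(j+1)*t/T) for j in range(2)])

s_vec = np.array([3.0, -1.5])      # signal vector s_i = [s_i1, s_i2]
s_t = s_vec @ phi                  # synthesizer: s_i(t) = sum_j s_ij phi_j(t)

s_vec_hat = phi @ s_t * dt         # analyzer: s_ij = integral of s_i(t) phi_j(t) dt
print(s_vec_hat)                   # ~ [3.0, -1.5], recovering the vector
```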
Accordingly, we may state that each signal in the set $\{s_i(t)\}$ is completely determined by the signal vector

$$\mathbf{s}_i = \begin{bmatrix} s_{i1} \\ s_{i2} \\ \vdots \\ s_{iN} \end{bmatrix}, \qquad i = 1, 2, \ldots, M$$

Figure 2.3 Illustrating the geometric representation of signals for the case when N = 2 and M = 3.

The squared length (norm) of any signal vector $\mathbf{s}_i$ is defined as the inner product of the vector with itself,

$$\|\mathbf{s}_i\|^2 = \mathbf{s}_i^T\mathbf{s}_i = \sum_{j=1}^{N} s_{ij}^2$$

where $s_{ij}$ is the jth element of $\mathbf{s}_i$ and the superscript T denotes matrix transposition. There is an interesting relationship between the energy content of a signal and its representation as a vector. By definition, the energy of a signal $s_i(t)$ of duration T seconds is

$$E_i = \int_0^T s_i^2(t)\,dt$$
Substituting eq. (4) into this expression, we get

$$E_i = \int_0^T \left[\sum_{j=1}^{N} s_{ij}\,\phi_j(t)\right]\left[\sum_{k=1}^{N} s_{ik}\,\phi_k(t)\right]dt$$

Interchanging the order of summation and integration, which we can do because they are both linear operations, and then rearranging terms, we get

$$E_i = \sum_{j=1}^{N}\sum_{k=1}^{N} s_{ij}\,s_{ik}\int_0^T \phi_j(t)\,\phi_k(t)\,dt$$

Since the basis functions form an orthonormal set, as in eq. (6), this reduces to

$$E_i = \sum_{j=1}^{N} s_{ij}^2 = \|\mathbf{s}_i\|^2$$

Thus, the energy of an energy signal $s_i(t)$ is equal to the squared length of the corresponding signal vector $\mathbf{s}_i$.
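As a quick numerical check of this identity, the following lines (reusing `phi`, `s_t`, `s_vec`, and `dt` from the sketch above) compare the waveform energy with the squared vector length:

```python
E_time = np.sum(s_t**2) * dt    # energy computed from the waveform
E_vec = np.sum(s_vec**2)        # squared length of the signal vector
print(E_time, E_vec)            # the two values agree (up to discretization error)
```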
Note: In the case of a pair of signals $s_1(t)$ and $s_2(t)$ represented by the signal vectors $\mathbf{s}_1$ and $\mathbf{s}_2$, respectively, we may also show that

$$\int_0^T s_1(t)\,s_2(t)\,dt = \mathbf{s}_1^T\mathbf{s}_2$$

The inner product of the energy signals $s_1(t)$ and $s_2(t)$ over the interval $[0, T]$ is equal to the inner product of their respective vector representations $\mathbf{s}_1$ and $\mathbf{s}_2$.

The relation involving the vector representations of the energy signals $s_i(t)$ and $s_k(t)$ is described by

$$\|\mathbf{s}_i - \mathbf{s}_k\|^2 = \sum_{j=1}^{N}\left(s_{ij} - s_{kj}\right)^2 = \int_0^T \left(s_i(t) - s_k(t)\right)^2 dt$$

where $\|\mathbf{s}_i - \mathbf{s}_k\|$ is the Euclidean distance between the points represented by the signal vectors $\mathbf{s}_i$ and $\mathbf{s}_k$.

The angle $\theta_{ik}$ subtended between two signal vectors $\mathbf{s}_i$ and $\mathbf{s}_k$ is such that the cosine of the angle is equal to the inner product of these two vectors divided by the product of their individual norms:

$$\cos\theta_{ik} = \frac{\mathbf{s}_i^T\mathbf{s}_k}{\|\mathbf{s}_i\|\,\|\mathbf{s}_k\|}$$

The two vectors $\mathbf{s}_i$ and $\mathbf{s}_k$ are thus orthogonal or perpendicular to each other if their inner product $\mathbf{s}_i^T\mathbf{s}_k$ is zero.

Schwarz Inequality
For any pair of energy signals $s_1(t)$ and $s_2(t)$, the Schwarz inequality states that

$$\left(\int_{-\infty}^{\infty} s_1(t)\,s_2(t)\,dt\right)^2 \le \left(\int_{-\infty}^{\infty} s_1^2(t)\,dt\right)\left(\int_{-\infty}^{\infty} s_2^2(t)\,dt\right)$$
The equality holds if, and only if, $s_2(t) = c\,s_1(t)$, where c is any constant.

Proof: To prove this important inequality, let $s_1(t)$ and $s_2(t)$ be expressed in terms of the pair of orthonormal basis functions $\phi_1(t)$ and $\phi_2(t)$ as follows:

$$s_1(t) = s_{11}\,\phi_1(t) + s_{12}\,\phi_2(t)$$
$$s_2(t) = s_{21}\,\phi_1(t) + s_{22}\,\phi_2(t)$$

where $\phi_1(t)$ and $\phi_2(t)$ satisfy the orthonormality conditions over the time interval:

$$\int_{-\infty}^{\infty} \phi_i(t)\,\phi_j(t)\,dt = \delta_{ij}$$

We may represent the signals $s_1(t)$ and $s_2(t)$ by the following respective pair of vectors:

$$\mathbf{s}_1 = \begin{bmatrix} s_{11} \\ s_{12} \end{bmatrix}, \qquad \mathbf{s}_2 = \begin{bmatrix} s_{21} \\ s_{22} \end{bmatrix}$$

Figure 2.4 Vector representations of signals $s_1(t)$ and $s_2(t)$, providing the background picture for proving the Schwarz inequality.

From Figure 2.4, the cosine of the angle $\theta$ subtended between the vectors $\mathbf{s}_1$ and $\mathbf{s}_2$ is

$$\cos\theta = \frac{\mathbf{s}_1^T\mathbf{s}_2}{\|\mathbf{s}_1\|\,\|\mathbf{s}_2\|} = \frac{\int_{-\infty}^{\infty} s_1(t)\,s_2(t)\,dt}{\left(\int_{-\infty}^{\infty} s_1^2(t)\,dt\right)^{1/2}\left(\int_{-\infty}^{\infty} s_2^2(t)\,dt\right)^{1/2}}$$

Recognizing that $|\cos\theta| \le 1$, we have

$$\left|\int_{-\infty}^{\infty} s_1(t)\,s_2(t)\,dt\right| \le \left(\int_{-\infty}^{\infty} s_1^2(t)\,dt\right)^{1/2}\left(\int_{-\infty}^{\infty} s_2^2(t)\,dt\right)^{1/2}$$

By squaring both sides, we obtain the Schwarz inequality,

$$\left(\int_{-\infty}^{\infty} s_1(t)\,s_2(t)\,dt\right)^2 \le \left(\int_{-\infty}^{\infty} s_1^2(t)\,dt\right)\left(\int_{-\infty}^{\infty} s_2^2(t)\,dt\right)$$

Finally, we note that $|\cos\theta| = 1$ if, and only if, $\mathbf{s}_2 = c\,\mathbf{s}_1$; that is, $s_2(t) = c\,s_1(t)$, where c is an arbitrary constant.

Note: For complex-valued signals, the Schwarz inequality is given by

$$\left|\int_{-\infty}^{\infty} s_1(t)\,s_2^*(t)\,dt\right|^2 \le \left(\int_{-\infty}^{\infty} |s_1(t)|^2\,dt\right)\left(\int_{-\infty}^{\infty} |s_2(t)|^2\,dt\right)$$

where the asterisk denotes complex conjugation and the equality holds if, and only if, $s_2(t) = c\,s_1(t)$, where c is a constant.
Gram–Schmidt Orthogonalization Procedure
To construct the orthonormal basis, we start by choosing the first basis function as

$$\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}}$$

where $E_1$ is the energy of the signal $s_1(t)$. We readily see that

$$s_1(t) = \sqrt{E_1}\,\phi_1(t) = s_{11}\,\phi_1(t)$$

From the above we have the coefficient $s_{11} = \sqrt{E_1}$, and $\phi_1(t)$ has unit energy, as required. Each remaining basis function is found by subtracting from the next signal its projections onto the basis functions already obtained, and normalizing the remainder to unit energy.
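The procedure used in the examples below can be automated. Here is a minimal Python sketch of Gram–Schmidt on sampled waveforms; the tolerance used to discard linearly dependent signals is an assumption:

```python
import numpy as np

# A minimal sketch of the Gram-Schmidt procedure on sampled waveforms,
# assuming signals are given as rows of an array and dt is the sample step.
def gram_schmidt(signals, dt):
    """Return orthonormal basis functions (rows) for the given energy signals."""
    basis = []
    for s in signals:
        g = s.astype(float).copy()
        for phi in basis:
            g -= (np.sum(s * phi) * dt) * phi   # subtract projection s_ij * phi_j
        energy = np.sum(g**2) * dt
        if energy > 1e-12:                      # keep only independent remainders
            basis.append(g / np.sqrt(energy))   # normalize remainder to unit energy
    return np.array(basis)
```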
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Example 2.1
Figure 2.5 displays the waveforms of four signals $s_1(t)$, $s_2(t)$, $s_3(t)$, and $s_4(t)$.
a. Using the Gram–Schmidt orthogonalization procedure, find an orthonormal basis for this
set of signals.
b. Construct the corresponding signal-space diagram.
Solution:
Figure 2.5 (The four signals are unit-amplitude rectangular pulses: $s_1(t)$ on $[0, T/3]$, $s_2(t)$ on $[0, 2T/3]$, $s_3(t)$ on $[T/3, T]$, and $s_4(t)$ on $[0, T]$.)

a. In the above diagram, $s_4(t) = s_1(t) + s_3(t)$. Therefore, we need to find only three basis functions, for the independent signals $s_1(t)$, $s_2(t)$, $s_3(t)$.

1. The energy of $s_1(t)$ is

$$E_1 = \int_0^{T/3} s_1^2(t)\,dt = \frac{T}{3}$$

Now

$$\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} = \begin{cases} \sqrt{3/T}, & 0 \le t \le T/3 \\ 0, & \text{otherwise} \end{cases}$$

2. The projection of $s_2(t)$ onto $\phi_1(t)$ is

$$s_{21} = \int_0^T s_2(t)\,\phi_1(t)\,dt = \int_0^{T/3} (1)\sqrt{\frac{3}{T}}\,dt = \sqrt{\frac{T}{3}}$$

Therefore, the remainder is

$$g_2(t) = s_2(t) - s_{21}\,\phi_1(t) = \begin{cases} 1, & T/3 \le t \le 2T/3 \\ 0, & \text{otherwise} \end{cases}$$

Now,

$$\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^T g_2^2(t)\,dt}} = \begin{cases} \sqrt{3/T}, & T/3 \le t \le 2T/3 \\ 0, & \text{otherwise} \end{cases}$$

3. The projections of $s_3(t)$ onto $\phi_1(t)$ and $\phi_2(t)$ are

$$s_{31} = \int_0^T s_3(t)\,\phi_1(t)\,dt = 0, \qquad s_{32} = \int_0^T s_3(t)\,\phi_2(t)\,dt = \sqrt{\frac{T}{3}}$$

Therefore, the remainder is

$$g_3(t) = s_3(t) - s_{31}\,\phi_1(t) - s_{32}\,\phi_2(t) = \begin{cases} 1, & 2T/3 \le t \le T \\ 0, & \text{otherwise} \end{cases}$$

But

$$\phi_3(t) = \frac{g_3(t)}{\sqrt{\int_0^T g_3^2(t)\,dt}} = \begin{cases} \sqrt{3/T}, & 2T/3 \le t \le T \\ 0, & \text{otherwise} \end{cases}$$

b. The signal vectors follow by projecting each signal onto the basis:

1. $s_{11} = \sqrt{E_1} = \sqrt{T/3}$, so $\mathbf{s}_1 = \left[\sqrt{T/3},\ 0,\ 0\right]^T$

2. $s_{21} = \sqrt{T/3}$, $s_{22} = \int_0^T s_2(t)\,\phi_2(t)\,dt = \sqrt{T/3}$, $s_{23} = 0$, so $\mathbf{s}_2 = \left[\sqrt{T/3},\ \sqrt{T/3},\ 0\right]^T$

3. $s_{31} = 0$, $s_{32} = \sqrt{T/3}$, $s_{33} = \int_0^T s_3(t)\,\phi_3(t)\,dt = \sqrt{T/3}$, so $\mathbf{s}_3 = \left[0,\ \sqrt{T/3},\ \sqrt{T/3}\right]^T$

4. We know that $s_4(t) = s_1(t) + s_3(t)$, so $\mathbf{s}_4 = \mathbf{s}_1 + \mathbf{s}_3 = \left[\sqrt{T/3},\ \sqrt{T/3},\ \sqrt{T/3}\right]^T$

The signal-space diagram places these four points in the three-dimensional space spanned by $\phi_1$, $\phi_2$, $\phi_3$.
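For a numerical cross-check, the `gram_schmidt` sketch above reproduces this example (with the assumed value T = 3 so that $\sqrt{T/3} = 1$):

```python
# Reproducing Example 2.1 numerically with the gram_schmidt sketch above
# (T = 3 so that each third of the interval is one unit long).
T, n = 3.0, 3000
t = np.linspace(0, T, n, endpoint=False)
dt = T / n
s1 = (t < T/3).astype(float)
s2 = (t < 2*T/3).astype(float)
s3 = ((t >= T/3) & (t < T)).astype(float)
s4 = np.ones_like(t)

basis = gram_schmidt(np.array([s1, s2, s3, s4]), dt)
print(len(basis))                                  # 3, since s4 = s1 + s3
print([np.sum(s4 * phi) * dt for phi in basis])    # ~ [1, 1, 1] = sqrt(T/3) each
```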
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Example 2.2
a. Using the Gram–Schmidt orthogonalization procedure, find a set of orthonormal basis functions to represent the three signals shown in Figure 2.6.
b. Express each of these signals in terms of the set of basis functions found in part a.
Figure 2.6
Solution:
In Figure 2.6, all three signals are linearly independent. Therefore, we need to find three orthonormal basis functions for the independent signals $s_1(t)$, $s_2(t)$, $s_3(t)$. The Gram–Schmidt procedure proceeds exactly as in Example 2.1:

1. $$\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}}, \qquad E_1 = \int_0^T s_1^2(t)\,dt$$

2. Compute the projection $s_{21} = \int_0^T s_2(t)\,\phi_1(t)\,dt$, form the remainder

$$g_2(t) = s_2(t) - s_{21}\,\phi_1(t)$$

and normalize it:

$$\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^T g_2^2(t)\,dt}}$$

3. Compute $s_{31} = \int_0^T s_3(t)\,\phi_1(t)\,dt$ and $s_{32} = \int_0^T s_3(t)\,\phi_2(t)\,dt$, form the remainder

$$g_3(t) = s_3(t) - s_{31}\,\phi_1(t) - s_{32}\,\phi_2(t)$$

and normalize it:

$$\phi_3(t) = \frac{g_3(t)}{\sqrt{\int_0^T g_3^2(t)\,dt}}$$

b. Each signal is then expressed in terms of the basis found in part a:

1. $s_1(t) = \sqrt{E_1}\,\phi_1(t)$

2. $s_2(t) = s_{21}\,\phi_1(t) + s_{22}\,\phi_2(t)$, where $s_{22} = \int_0^T s_2(t)\,\phi_2(t)\,dt$

3. $s_3(t) = s_{31}\,\phi_1(t) + s_{32}\,\phi_2(t) + s_{33}\,\phi_3(t)$, where $s_{33} = \int_0^T s_3(t)\,\phi_3(t)\,dt$

The numerical values of the coefficients follow from the particular waveforms in Figure 2.6.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2B1Q Code:
The 2B1Q code is the North American line code for a special class of modems called digital
subscriber lines. This code represents a quaternary PAM signal as shown in the Gray-encoded
alphabet of Table 2.1.
Table 2.1 Amplitude levels of the 2B1Q code
Signal Amplitude Gray Code
-3 00
-1 01
+1 11
+3 10
The four possible signals $a_i\,\phi(t)$, with amplitudes $a_i \in \{-3, -1, +1, +3\}$, are amplitude-scaled versions of a Nyquist pulse. Each signal represents a dibit (a pair of bits).

Let $\phi(t)$ denote the pulse normalized to have unit energy. The $\phi(t)$ so defined is the only basis function for the vector representation of the 2B1Q code. The signal-space representation of this code is as shown in Figure 2.7. It consists of four signal vectors $\mathbf{s}_i = a_i$, $i = 1, \ldots, 4$, which are located on the $\phi_1$-axis in a symmetric manner about the origin. In this example, we have M = 4 and N = 1.
Figure 2.7 Signal-space representation of the 2B1Q code.
We may generalize the result depicted in Figure 2.7 for the 2B1Q code as follows: the signal
space diagram of an M-ary PAM signal, in general, is one-dimensional with M signal points
uniformly positioned on the only axis of the diagram.
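Table 2.1 is a direct lookup. A small sketch of the dibit-to-level mapping (the function name is illustrative):

```python
# A small sketch of 2B1Q mapping: each Gray-coded dibit from Table 2.1
# selects one of four amplitude levels.
LEVELS = {"00": -3, "01": -1, "11": +1, "10": +3}

def encode_2b1q(bits: str) -> list[int]:
    """Map a bit string (even length) to a sequence of 2B1Q amplitudes."""
    return [LEVELS[bits[i:i+2]] for i in range(0, len(bits), 2)]

print(encode_2b1q("00011110"))   # [-3, -1, 1, 3]
```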
Conversion of the Continuous AWGN Channel into a Vector Channel

Suppose the received signal is

$$x(t) = s_i(t) + w(t), \qquad 0 \le t \le T$$

where $w(t)$ is a sample function of the white Gaussian noise process of zero mean and power spectral density $N_0/2$.

We find that the output of correlator $j$, $j = 1, \ldots, N$, is the sample value of a random variable $X_j$, whose sample value $x_j$ is defined by

$$x_j = \int_0^T x(t)\,\phi_j(t)\,dt = \int_0^T s_i(t)\,\phi_j(t)\,dt + \int_0^T w(t)\,\phi_j(t)\,dt = s_{ij} + w_j$$

The first component, $s_{ij}$, is the deterministic component of $x_j$ due to the transmitted signal $s_i(t)$, as shown by

$$s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt$$

The second component, $w_j$, is the sample value of a random variable $W_j$ due to the channel noise $w(t)$, as shown by

$$w_j = \int_0^T w(t)\,\phi_j(t)\,dt$$
Consider next a new stochastic process whose sample function $x'(t)$ is related to the received signal $x(t)$ as follows:

$$x'(t) = x(t) - \sum_{j=1}^{N} x_j\,\phi_j(t)$$

We know that $x(t) = s_i(t) + w(t)$ and $x_j = s_{ij} + w_j$. Therefore,

$$x'(t) = s_i(t) + w(t) - \sum_{j=1}^{N}\left(s_{ij} + w_j\right)\phi_j(t) = w(t) - \sum_{j=1}^{N} w_j\,\phi_j(t) = w'(t)$$

The sample function $x'(t)$ depends solely on the channel noise w(t). We may thus express the received signal as

$$x(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) + w'(t)$$

where $w'(t)$ is a remainder term that must be included on the right-hand side of the above equation to preserve equality. The expansion of the above equation is random (stochastic) due to the channel noise at the receiver input.
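A short simulation sketch (reusing `phi`, `s_t`, `s_vec`, and `dt` from the earlier sketch) illustrates the vector channel: each correlator output has mean $s_{ij}$ and variance $N_0/2$, as shown in the next section:

```python
import numpy as np

# A sketch of the vector AWGN channel: project signal-plus-noise onto the
# basis and check that each correlator output is s_ij plus noise of
# variance N0/2 (assumes the discretized basis `phi`, `dt` from earlier).
rng = np.random.default_rng(0)
N0 = 2.0
trials = 20000
# White noise with PSD N0/2 discretizes to variance (N0/2)/dt per sample.
w = rng.normal(0.0, np.sqrt((N0/2) / dt), size=(trials, phi.shape[1]))
x = s_t + w                          # received signal x(t) = s_i(t) + w(t)
X = x @ phi.T * dt                   # correlator outputs x_j
print(X.mean(axis=0))                # ~ s_vec  (mean s_ij)
print(X.var(axis=0))                 # ~ N0/2 = 1.0 for every j
```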
Statistical Characterization of the Correlator Outputs
We develop a statistical characterization of the set of correlator outputs. Let $X(t)$ denote the stochastic process, a sample function of which is represented by the received signal $x(t)$. Let $X_j$ denote the random variable whose sample value is represented by the correlator output $x_j$, $j = 1, \ldots, N$. According to the AWGN model, the stochastic process $X(t)$ is a Gaussian process. It follows, therefore, that $X_j$ is a Gaussian random variable for all $j$. Hence, $X_j$ is characterized completely by its mean and variance.

Let $W_j$ denote the random variable represented by the sample value $w_j$ produced by the jth correlator in response to the noise $w(t)$.

a) The mean of $X_j$ is

$$\mu_{X_j} = E[X_j] = E[s_{ij} + W_j] = s_{ij} + E[W_j] = s_{ij}$$

since the noise has zero mean.

b) The variance of $X_j$ is

$$\sigma_{X_j}^2 = \mathrm{var}[X_j] = E\left[W_j^2\right] = E\left[\int_0^T W(t)\,\phi_j(t)\,dt \int_0^T W(u)\,\phi_j(u)\,du\right]$$

Interchanging the order of integration and expectation, which we can do because they are both linear operations, we obtain

$$\sigma_{X_j}^2 = \int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,E[W(t)W(u)]\,dt\,du = \int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,R_W(t, u)\,dt\,du$$

where $R_W(t, u)$ is the autocorrelation function of the noise process $W(t)$. Since this noise is stationary, $R_W(t, u)$ depends only on the time difference $t - u$. Since $W(t)$ is white with a constant power spectral density $N_0/2$,

$$R_W(t, u) = \frac{N_0}{2}\,\delta(t - u)$$

We may therefore express the variance as

$$\sigma_{X_j}^2 = \frac{N_0}{2}\int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,\delta(t - u)\,dt\,du = \frac{N_0}{2}\int_0^T \phi_j^2(t)\,dt = \frac{N_0}{2}$$

since $\phi_j(t)$ has unit energy.

c) The basis functions form an orthonormal set, so the correlator outputs $X_j$ and $X_k$, $j \ne k$, are mutually uncorrelated, as shown by

$$\mathrm{cov}[X_j X_k] = E\left[(X_j - \mu_{X_j})(X_k - \mu_{X_k})\right] = E[W_j W_k]$$
$$= E\left[\int_0^T W(t)\,\phi_j(t)\,dt\int_0^T W(u)\,\phi_k(u)\,du\right] = \int_0^T\!\!\int_0^T \phi_j(t)\,\phi_k(u)\,R_W(t, u)\,dt\,du$$
$$= \frac{N_0}{2}\int_0^T \phi_j(t)\,\phi_k(t)\,dt = 0, \qquad j \ne k$$

Since the $X_j$ are Gaussian and uncorrelated, they are statistically independent. Define the vector of correlator outputs

$$\mathbf{X} = [X_1, X_2, \ldots, X_N]^T$$

whose elements are independent Gaussian random variables with mean values equal to $s_{ij}$ and variances equal to $N_0/2$.

Since the elements of the vector $\mathbf{X}$ are statistically independent, we may express the conditional probability density function of the vector $\mathbf{x}$, given that the signal $s_i(t)$ or the corresponding symbol $m_i$ was sent, as the product of the conditional probability density functions of its individual elements:

$$f_{\mathbf{X}}(\mathbf{x} \mid m_i) = \prod_{j=1}^{N} f_{X_j}(x_j \mid m_i), \qquad i = 1, 2, \ldots, M$$

where the vector $\mathbf{x}$ and scalar $x_j$ are sample values of the random vector $\mathbf{X}$ and random variable $X_j$, respectively. The vector $\mathbf{x}$ is called the observation vector; correspondingly, $x_j$ is called an element of the observation vector. A channel that satisfies the above equation is said to be a memoryless channel.

Since each $X_j$ is a Gaussian random variable with mean $s_{ij}$ and variance $N_0/2$, we have

$$f_{X_j}(x_j \mid m_i) = \frac{1}{\sqrt{\pi N_0}}\exp\left[-\frac{1}{N_0}\left(x_j - s_{ij}\right)^2\right], \qquad j = 1, \ldots, N$$

Substituting into the product, for all $i$ we have

$$f_{\mathbf{X}}(\mathbf{x} \mid m_i) = (\pi N_0)^{-N/2}\exp\left[-\frac{1}{N_0}\sum_{j=1}^{N}\left(x_j - s_{ij}\right)^2\right], \qquad i = 1, 2, \ldots, M$$
Now the remainder noise term $w'(t)$ is to be considered. The noise process represented by $w(t)$ is Gaussian with zero mean. Similarly, the noise process represented by the sample function $w'(t)$ is also a zero-mean Gaussian process. Any random variable $W'(t_k)$, derived from the noise process $W'(t)$ by sampling it at time $t_k$, is statistically independent of the set of random variables $\{X_j\}$:

$$f\left(\mathbf{x},\, w'(t_k) \mid m_i\right) = f_{\mathbf{X}}(\mathbf{x} \mid m_i)\,f_{W'}\!\left(w'(t_k)\right)$$

Theorem of irrelevance:
The above equation states that the random variable $W'(t_k)$ is irrelevant to the decision of which particular signal was transmitted.

"As far as signal detection in AWGN is concerned, only the projections of the noise onto the basis functions of the signal set affect the sufficient statistics of the detection problem; the remainder of the noise is irrelevant."

From the conditional pdf above, the log-likelihood function is

$$l(m_i) = \ln f_{\mathbf{X}}(\mathbf{x} \mid m_i) = -\frac{1}{N_0}\sum_{j=1}^{N}\left(x_j - s_{ij}\right)^2, \qquad i = 1, 2, \ldots, M$$

where we have ignored the constant term $-\frac{N}{2}\ln(\pi N_0)$, since it bears no relation to the message symbol $m_i$.
Optimum Receivers Using Coherent Detection
Maximum Likelihood Decoding

Figure 2.8 Illustrating the effect of (a) noise perturbation on (b) the location of the received signal point.

1. One of the M possible signals $s_1(t), \ldots, s_M(t)$ is transmitted every $T$ seconds with equal probability, $1/M$.
2. For geometric signal representation, the signal $s_i(t)$, $i = 1, \ldots, M$, is applied to a bank of correlators with a common input and supplied with an appropriate set of N orthonormal basis functions.
3. The resulting correlator outputs define the signal vector $\mathbf{s}_i$. We refer to this point as the transmitted signal point, or message point. The set of message points corresponding to the set of M transmitted signals is called a signal constellation.
4. The received signal $x(t)$ is applied to the same bank of correlators, whose outputs define the observation vector $\mathbf{x}$; the corresponding point is referred to as the received signal point.
5. The noise vector $\mathbf{w}$ represents that portion of the noise $w(t)$ that will interfere with the detection process; the remaining portion of the noise, $w'(t)$, is tuned out by the bank of correlators.
6. The observation vector may thus be expressed as $\mathbf{x} = \mathbf{s}_i + \mathbf{w}$.
7. Due to the presence of noise, the received signal point wanders about the message point, as shown in Figure 2.8(a).

Signal detection problem
Given the observation vector $\mathbf{x}$, perform a mapping from $\mathbf{x}$ to an estimate $\hat{m}$ of the transmitted symbol, $m_i$, in a way that would minimize the probability of error in the decision-making process.
1. Given the observation vector $\mathbf{x}$, we make the decision $\hat{m} = m_i$. The probability of error in this decision, which we denote by $P_e(m_i \mid \mathbf{x})$, is

$$P_e(m_i \mid \mathbf{x}) = P(m_i\ \text{not sent} \mid \mathbf{x}) = 1 - P(m_i\ \text{sent} \mid \mathbf{x})$$

The requirement is to minimize the average probability of error in mapping each given observation vector $\mathbf{x}$ into a decision. Therefore, we state the optimum decision rule: "Set $\hat{m} = m_i$ if $P(m_i\ \text{sent} \mid \mathbf{x}) \ge P(m_k\ \text{sent} \mid \mathbf{x})$ for all $k \ne i$." The decision rule so described is referred to as the maximum a posteriori probability (MAP) rule. The system used to implement this rule is called a maximum a posteriori decoder.

2. We may express the above rule in terms of the prior probabilities of the transmitted signals and the likelihood functions, using Bayes' rule. We may restate the MAP rule as follows:

"Set $\hat{m} = m_i$ if

$$\frac{p_k\,f_{\mathbf{X}}(\mathbf{x} \mid m_k)}{f_{\mathbf{X}}(\mathbf{x})}\ \text{is maximum for}\ k = i$$"

When the symbols are equally likely, the prior $p_k = 1/M$ and the denominator $f_{\mathbf{X}}(\mathbf{x})$ are the same for all $k$, so the rule reduces to maximizing the likelihood, or equivalently the log-likelihood function $l(m_k)$; this is the maximum likelihood (ML) rule.

From the log-likelihood function, we note that $l(m_k)$ attains its maximum value when the summation term $\sum_{j=1}^{N}(x_j - s_{kj})^2$ is minimum for $k = i$. Accordingly, we may formulate the ML decision rule as:

"Observation vector $\mathbf{x}$ lies in the region $Z_i$ if

$$\sum_{j=1}^{N}\left(x_j - s_{kj}\right)^2\ \text{is minimum for}\ k = i$$"

Note we have used "minimum" as the optimizing condition because of the minus sign in the log-likelihood function.

We have

$$\sum_{j=1}^{N}\left(x_j - s_{kj}\right)^2 = \|\mathbf{x} - \mathbf{s}_k\|^2$$

where $\|\mathbf{x} - \mathbf{s}_k\|$ is the Euclidean distance between the observation vector at the receiver input and the transmitted signal vector $\mathbf{s}_k$.

Accordingly, we may restate the decision rule as:

"Observation vector $\mathbf{x}$ lies in region $Z_i$ if the Euclidean distance $\|\mathbf{x} - \mathbf{s}_k\|$ is minimum for $k = i$."

This states that the maximum likelihood decision rule is simply to choose the message point closest to the received signal point.
Consider the expansion

$$\|\mathbf{x} - \mathbf{s}_k\|^2 = \sum_{j=1}^{N}\left(x_j - s_{kj}\right)^2 = \sum_{j=1}^{N} x_j^2 - 2\sum_{j=1}^{N} x_j\,s_{kj} + \sum_{j=1}^{N} s_{kj}^2$$

The first summation term of this expansion is independent of the index k pertaining to the transmitted signal vector $\mathbf{s}_k$ and, therefore, may be ignored.

The second summation term is the inner product of the observation vector $\mathbf{x}$ and the transmitted signal vector $\mathbf{s}_k$.

The third summation term is the transmitted signal energy

$$E_k = \sum_{j=1}^{N} s_{kj}^2$$

We may therefore reformulate the maximum-likelihood decision rule as:

"Observation vector $\mathbf{x}$ lies in region $Z_i$ if

$$\sum_{j=1}^{N} x_j\,s_{kj} - \frac{1}{2}E_k\ \text{is maximum for}\ k = i$$"
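The reformulated rule is easy to state in code. A minimal sketch, assuming the message points are the rows of a matrix S and x is the observation vector:

```python
import numpy as np

def ml_decide(x, S):
    """Maximum-likelihood decision: index of the message point maximizing
    the correlation metric x.s_k - E_k/2 (equivalently, nearest to x)."""
    energies = np.sum(S**2, axis=1)          # E_k for each message point
    metrics = S @ x - 0.5 * energies         # inner products corrected by energy
    return int(np.argmax(metrics))

# Example: three message points in a 2-dimensional signal space.
S = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
print(ml_decide(np.array([0.9, 0.2]), S))    # 0 -> closest message point is s_1
```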
Correlation Receiver
The optimum receiver for an AWGN channel, for the case when the transmitted signals are equally likely, is called a correlation receiver; it consists of two subsystems, which are detailed in Figure 2.10:
1. Detector (Figure 2.10a), which consists of N correlators supplied with a set of N orthonormal basis functions that are generated locally; this bank of correlators operates on the received signal $x(t)$, $0 \le t \le T$, to produce the observation vector $\mathbf{x}$.
2. Maximum-likelihood decoder (Figure 2.10b), which operates on the observation vector $\mathbf{x}$ to produce an estimate $\hat{m}$ of the transmitted symbol $m_i$, $i = 1, \ldots, M$, in such a way that the average probability of symbol error is minimized.

Figure 2.10 (a) Detector or demodulator. (b) Signal transmission decoder.

In accordance with the maximum likelihood decision rule, the decoder multiplies the N elements of the observation vector $\mathbf{x}$ by the corresponding elements of each of the M signal vectors $\mathbf{s}_1, \ldots, \mathbf{s}_M$. Then, the resulting products are successively summed in accumulators to form the corresponding set of inner products $\{\mathbf{x}^T\mathbf{s}_k,\ k = 1, \ldots, M\}$. Next, the inner products are corrected for the transmitted signal energies by subtracting $E_k/2$. Finally, the largest value in the resulting set of numbers is selected, and an appropriate decision on the transmitted message is made.
Matched Filter Receiver
The detector part described in the previous section involves a set of correlators. Alternatively, we may use a different but equivalent structure in place of the correlators.

Consider a linear time-invariant filter with impulse response $h_j(t)$. With the received signal $x(t)$ operating as input, the resulting filter output is defined by the convolution integral

$$y_j(t) = \int_{-\infty}^{\infty} x(\tau)\,h_j(t - \tau)\,d\tau$$

Suppose we set the impulse response to be a time-reversed and delayed version of the basis function,

$$h_j(t) = \phi_j(T - t)$$

Evaluating the filter output at the end of a transmitted symbol, namely $t = T$, we may replace the variable $T - \tau$ and go on to write

$$y_j(T) = \int_{-\infty}^{\infty} x(\tau)\,\phi_j(\tau)\,d\tau = \int_0^T x(\tau)\,\phi_j(\tau)\,d\tau = x_j$$

which is exactly the output of the correlator based on $\phi_j(t)$. A filter so defined is called a matched filter; the detector may therefore be implemented as a bank of matched filters, each sampled at time $t = T$.
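A short numerical sketch of this equivalence: convolving with the time-reversed basis function and sampling at t = T gives the same number as the correlator (the signal and basis used here are illustrative):

```python
import numpy as np

# Matched filter = correlator: convolving x with h(t) = phi(T - t) and
# sampling at t = T reproduces the correlation integral (discrete approximation).
rng = np.random.default_rng(1)
n = 1000
dt = 1.0 / n
phi_j = np.sqrt(2.0) * np.sin(2 * np.pi * np.arange(n) * dt)  # unit-energy basis on [0, 1)
x_t = 0.7 * phi_j + rng.normal(0, 1, n)                       # received signal samples

corr_out = np.sum(x_t * phi_j) * dt              # correlator output x_j
h = phi_j[::-1]                                  # matched filter h(t) = phi(T - t)
mf_out = np.convolve(x_t, h)[n - 1] * dt         # filter output sampled at t = T
print(corr_out, mf_out)                          # the two outputs coincide
```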
Equivalently, we may express the condition imposed on the desired frequency response of the channel as

$$C(f) = \begin{cases} \alpha\,e^{-j2\pi f t_0}, & |f| \le W \\ 0, & |f| > W \end{cases}$$

where $W$ is the available channel bandwidth, $t_0$ represents a finite delay, which we set to zero for convenience, and $\alpha$ is a constant gain factor, which we set to unity for convenience.

Thus, under the condition that the channel is distortion free and the bandwidth of the transmitted signal is limited to $W$, the received pulse shape is preserved.

The matched filter at the receiver has a frequency response $G_R(f)$, and its output at the periodic sampling times $t = mT$ has the form

$$y(mT) = a_m + \sum_{n \ne m} a_n\,x(mT - nT) + \nu(mT)$$

where $x(t)$ is the overall pulse at the matched-filter output, normalized so that $x(0) = 1$, and $\nu(t)$ is the output response of the matched filter to the input AWGN process.
The middle term on the right-hand side of the above equation represents the ISI. The amount of ISI and noise present in the received signal can be viewed on an oscilloscope: we display the received signal on the vertical input, with the horizontal sweep rate set at $1/T$. The resulting oscilloscope display is called an eye pattern because of its resemblance to the human eye. Examples of two eye patterns, one for binary PAM and the other for quaternary (M = 4) PAM, are illustrated in Figure 2.12.

Figure 2.12 Eye patterns. (a) Examples of eye patterns for binary and quaternary PAM and (b) the effect of ISI on the eye pattern.

The effect of ISI is to cause the eye to close, thereby reducing the margin for additive noise to cause errors. Note that ISI distorts the position of the zero crossings and causes a reduction in the eye opening. As a consequence, the system is more sensitive to synchronization errors and exhibits a smaller margin against additive noise.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Example 2.3
Consider a binary PAM system that transmits data at a rate of $1/T$ bits/sec through an ideal channel of bandwidth $W$. The sampled output from the matched filter at the receiver is

$$y_m = a_m + \sum_{n \ne 0} a_{m-n}\,x_n + \nu_m \qquad (1)$$

where $a_m = \pm 1$ with equal probability and only the two adjacent samples $x_{-1}$ and $x_1$ of the overall pulse are nonzero. Determine the peak value of the ISI and the noise margin.

Solution:
The ISI is caused by the second term in eq. (1); here, only the terms $a_{m-1}x_1$ and $a_{m+1}x_{-1}$ contribute. The peak value of the ISI occurs when the neighboring symbols $a_{m-1}$ and $a_{m+1}$ both take the polarity for which the two ISI terms add with the same sign, opposing the desired symbol $a_m$.
With the given sample values, the ISI term then takes its peak magnitude of 0.5. Since $a_m = \pm 1$, the ISI causes a 50% reduction in the eye opening at the sampling times $t = mT$. Hence, the noise margin is reduced by 50%, to a value of 0.5.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Design of Bandlimited Signals for Zero ISI - The Nyquist Criterion
Consider a digital communication system that transmits through an ideal bandlimited channel $C(f)$, where the bandwidth of $C(f)$ is less than or equal to $W$ Hz.

The Fourier transform of the overall pulse $x(t)$ at the output of the receiving filter is

$$X(f) = G_T(f)\,C(f)\,G_R(f)$$

where $G_T(f)$ and $G_R(f)$ denote the frequency responses of the transmitter and receiver filters and $C(f)$ denotes the frequency response of the channel. For convenience, we set $C(f) = 1$ for $|f| \le W$ and normalize $x(0) = 1$.

To remove the effect of ISI, it is necessary and sufficient that the samples of $x(t)$ in the output of the receiving filter satisfy

$$x(nT) = \begin{cases} 1, & n = 0 \\ 0, & n \ne 0 \end{cases}$$

This condition is known as the Nyquist pulse-shaping criterion or Nyquist condition for zero ISI.

Nyquist Condition for Zero ISI
A necessary and sufficient condition for x(t) to satisfy

$$x(nT) = \begin{cases} 1, & n = 0 \\ 0, & n \ne 0 \end{cases}$$

is that its Fourier transform $X(f)$ must satisfy

$$\sum_{m=-\infty}^{\infty} X\!\left(f + \frac{m}{T}\right) = T$$

Suppose the channel has a bandwidth of $W$ Hz. Then $C(f) = 0$ for $|f| > W$ and, consequently, $X(f) = 0$ for $|f| > W$. We have three cases:

1. When $T < \frac{1}{2W}$, the replicas of $X(f)$, spaced $1/T$ apart, do not overlap and gaps remain between them; since $X(f) = 0$ for $|f| > W$, there is no choice of $X(f)$ for which the sum can equal the constant $T$. Thus, zero-ISI transmission at rates above $2W$ is impossible.

2. When $T = \frac{1}{2W}$, or $1/T = 2W$ (the Nyquist rate), the replicas just touch, and the only $X(f)$ that satisfies the condition is

$$X(f) = \begin{cases} T, & |f| \le W \\ 0, & |f| > W \end{cases}$$

which results in

$$x(t) = \frac{\sin(\pi t/T)}{\pi t/T}$$

This means that the smallest value of $T$ for which transmission with zero ISI is possible is $T = \frac{1}{2W}$; for this value, $x(t)$ has to be a sinc function.

3. When $T > \frac{1}{2W}$, the replicas overlap, and there exist numerous choices for $X(f)$ that satisfy the Nyquist condition.
A particular pulse spectrum that has desirable spectral properties and is widely used in practice is the raised cosine spectrum:

$$X_{rc}(f) = \begin{cases} T, & 0 \le |f| \le \dfrac{1-\beta}{2T} \\[6pt] \dfrac{T}{2}\left[1 + \cos\!\left(\dfrac{\pi T}{\beta}\left(|f| - \dfrac{1-\beta}{2T}\right)\right)\right], & \dfrac{1-\beta}{2T} \le |f| \le \dfrac{1+\beta}{2T} \\[6pt] 0, & |f| > \dfrac{1+\beta}{2T} \end{cases}$$

where $\beta$ is called the roll-off factor, which takes values in the range $0 \le \beta \le 1$.

The bandwidth occupied by the signal beyond the Nyquist frequency $1/2T$ is called the excess bandwidth and is usually expressed as a percentage of the Nyquist frequency. For example, when $\beta = 1/2$, the excess bandwidth is 50%; when $\beta = 1$, the excess bandwidth is 100%.

The pulse having the raised cosine spectrum is

$$x(t) = \frac{\sin(\pi t/T)}{\pi t/T}\cdot\frac{\cos(\pi\beta t/T)}{1 - 4\beta^2 t^2/T^2}$$

Note that $x(t)$ is normalized so that $x(0) = 1$.

Figure 2.16 illustrates the raised cosine spectral characteristics and the corresponding pulses for $\beta = 0, 1/2, 1$.
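A sketch of the raised-cosine pulse, with the removable singularity at $t = \pm T/(2\beta)$ handled by its limiting value (the parameter defaults are illustrative):

```python
import numpy as np

# A sketch of the raised-cosine pulse x(t) for a given roll-off beta,
# normalized so that x(0) = 1 (T is the symbol interval).
def raised_cosine(t, T=1.0, beta=0.5):
    t = np.asarray(t, dtype=float)
    x = np.sinc(t / T) * np.cos(np.pi * beta * t / T)
    denom = 1.0 - (2.0 * beta * t / T) ** 2
    # Take the limiting value at the two singular points t = +/- T/(2 beta).
    near = np.isclose(denom, 0.0)
    x = np.where(near, (np.pi / 4) * np.sinc(1.0 / (2.0 * beta)),
                 x / np.where(near, 1.0, denom))
    return x

t = np.arange(-4, 5)           # sampling instants t = nT
print(raised_cosine(t))        # ~ [0 0 0 0 1 0 0 0 0]: zero ISI at t = nT
```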
The condition for zero ISI is $x(nT) = 0$ for $n \ne 0$. However, suppose that we design the bandlimited signal to have controlled ISI at one time instant. This means that we allow one additional nonzero value in the samples $\{x(nT)\}$. The ISI that we introduce is deterministic or "controlled"; hence, it can be taken into account at the receiver.

One special case that leads to (approximately) physically realizable transmitting and receiving filters is specified by the samples

$$x(nT) = \begin{cases} 1, & n = 0, 1 \\ 0, & \text{otherwise} \end{cases}$$

Prerequisite equations
Consider the sampled-spectrum relation

$$\sum_{m=-\infty}^{\infty} X\!\left(f + \frac{m}{T}\right) = B(f) \qquad (a)$$

where

$$B(f) = T\sum_{n=-\infty}^{\infty} x(nT)\,e^{-j2\pi f nT}$$

But, for the samples chosen above,

$$B(f) = T\left(1 + e^{-j2\pi fT}\right)$$

If substituted in the above equation (a), with the bandwidth restricted to $|f| \le 1/2T$, it yields

$$X(f) = \begin{cases} T\left(1 + e^{-j2\pi fT}\right) = 2T\,e^{-j\pi fT}\cos(\pi f T), & |f| \le \dfrac{1}{2T} \\[4pt] 0, & \text{otherwise} \end{cases}$$

Therefore, the magnitude spectrum is given by $|X(f)| = 2T\cos(\pi f T)$ for $|f| \le 1/2T$. This pulse is called a duobinary signal pulse.

We note that the spectrum decays to zero smoothly, which means that physically realizable filters can be designed to approximate this spectrum very closely. Thus, a symbol rate of $1/T = 2W$ is achieved.
***************************************************************************
Another special case that leads to (approximately) physically realizable transmitting and receiving filters is specified by the samples

$$x(nT) = \begin{cases} 1, & n = -1 \\ -1, & n = 1 \\ 0, & \text{otherwise} \end{cases}$$

The corresponding spectrum is

$$X(f) = \begin{cases} T\left(e^{j2\pi fT} - e^{-j2\pi fT}\right) = 2jT\sin(2\pi fT), & |f| \le \dfrac{1}{2T} \\[4pt] 0, & \text{otherwise} \end{cases}$$

This pulse and its magnitude spectrum are illustrated in Figure 2.18. It is called a modified duobinary signal pulse.
It is interesting to note that the spectrum of this signal has a zero at $f = 0$, making it suitable for transmission over a channel that does not pass DC.
**************************************************************************
Note: (a) We can obtain other interesting and physically realizable filter characteristics by selecting different values for the samples $\{x(nT)\}$ and by selecting more than two nonzero samples.
(b) However, as we select more nonzero samples, the problem of unraveling the controlled ISI becomes more cumbersome and impractical.
**************************************************************************
In general, the class of bandlimited signal pulses that have the form

$$x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\frac{\sin\left(\pi(t - nT)/T\right)}{\pi(t - nT)/T}$$

have corresponding spectra

$$X(f) = \begin{cases} T\displaystyle\sum_{n=-\infty}^{\infty} x(nT)\,e^{-j2\pi f nT}, & |f| \le \dfrac{1}{2T} \\[4pt] 0, & |f| > \dfrac{1}{2T} \end{cases}$$

These signals are called partial response signals when controlled ISI is purposely introduced by selecting two or more nonzero samples from the set $\{x(nT)\}$. The resulting signal pulses allow us to transmit information symbols at the Nyquist rate of $2W$ symbols/sec.
The output of the receiving filter is sampled to give $y_m = b_m + \nu_m$, where $b_m$ is the desired (noise-free) level and $\nu_m$ is the sample of the additive Gaussian noise, which has zero mean and variance

$$\sigma_\nu^2 = \frac{N_0}{2}\int_{-1/2T}^{1/2T} |G_R(f)|^2\,df$$

In general, $a_m$ takes one of M possible equally spaced amplitude values with equal probability. In the absence of ISI, the problem of evaluating the probability of error for digital PAM in a bandlimited, additive white Gaussian noise channel is identical to the evaluation of the error probability for M-ary PAM. It is given by

$$P_M = 2\left(1 - \frac{1}{M}\right)Q\!\left(\sqrt{\frac{6\,E_{av}}{(M^2 - 1)\,N_0}}\right)$$

where $E_{av}$ is the average energy per symbol and $Q(\cdot)$ denotes the Gaussian Q-function.

For detection, the overall spectrum of the partial response signal is split evenly between the transmitting and receiving filters,

$$|G_T(f)| = |G_R(f)| = |X(f)|^{1/2}, \qquad |f| \le \frac{1}{2T}$$

For the duobinary signal pulse, $x(nT) = 1$ for $n = 0, 1$ and zero otherwise. Hence, the samples at the output of the receiving filter have the form

$$y_m = b_m + \nu_m = a_m + a_{m-1} + \nu_m$$

If $a_{m-1}$ is the detected symbol from the signaling interval beginning at $(m-1)T$, its effect on $y_m$, the received signal in the mth signaling interval, can be eliminated by subtraction, thus allowing $a_m$ to be detected. This process can be repeated sequentially for every received symbol.
.
The major problem with this procedure is that errors arising from the additive noise tend to
propagate. For example, if the detector makes an error in detecting , its effect on is not
eliminated; it is reinforced by the incorrect subtraction. Hence, the detection of is likely to be
in error.
Error propagation can be avoided by precoding the data at the transmitter instead of
eliminating the controlled ISI by subtraction at the receiver. The precoding is performed on
the binary data sequence prior to modulation.
From the data sequence of ones and zeros that is to be transmitted, a new sequence , called
the precoded sequence, is generated. For the duobinary signal, the precoded sequence is
defined as
The noise-free samples at the output of the receiving filter are given as
Consequently,
( )
Since , it follows that the data sequence is obtained from by using the relation
( )
If , and if , .
The received level for the mth transmission is directly related to , the data at the same
transmission time. Therefore, an error in reception of only affects the corresponding data ,
and no error propagation occurs.
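These relations translate directly into code. A minimal sketch, with the precoder initial state p0 = 0 as an assumption:

```python
# A sketch of duobinary precoding and symbol-by-symbol decoding.
# The precoder initial state p0 = 0 is an assumption for illustration.
def duobinary(data):
    p_prev, a_prev = 0, -1            # p0 = 0, so a0 = 2*p0 - 1 = -1
    received, decoded = [], []
    for d in data:
        p = (d - p_prev) % 2          # precode: p_m = d_m (-) p_{m-1} mod 2
        a = 2 * p - 1                 # PAM mapping: a_m = 2 p_m - 1
        b = a + a_prev                # noise-free duobinary level b_m = a_m + a_{m-1}
        received.append(b)
        decoded.append((b // 2 + 1) % 2)   # d_m = (b_m/2 + 1) mod 2
        p_prev, a_prev = p, a
    return received, decoded

data = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1]
b, d_hat = duobinary(data)
print(d_hat == data)                  # True: no error propagation by construction
```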
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Example 2.4
For the binary data sequence given as
1 1 1 0 1 0 0 1 0 0 0 1 1 0 1,
determine the precoded sequence $\{p_m\}$, the transmitted sequence $\{a_m\}$, the received sequence $\{b_m\}$, and the decoded sequence $\{\hat{d}_m\}$.

Solution
Using the relations $p_m = d_m \ominus p_{m-1} \pmod 2$, $a_m = 2p_m - 1$, $b_m = a_m + a_{m-1}$, and $\hat{d}_m = (b_m/2 + 1) \bmod 2$, with the precoder initialized to $p_0 = 0$:

Data d_m:        1  1  1  0  1  0  0  1  0  0  0  1  1  0  1
Precoded p_m:    1  0  1  1  0  0  0  1  1  1  1  0  1  1  0
Transmitted a_m: 1 -1  1  1 -1 -1 -1  1  1  1  1 -1  1  1 -1
Received b_m:    0  0  0  2  0 -2 -2  0  2  2  2  0  0  2  0
Decoded d_m:     1  1  1  0  1  0  0  1  0  0  0  1  1  0  1

The decoded sequence equals the transmitted data sequence.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In the presence of additive noise, the sampled outputs from the receiving filter are given by $y_m = b_m + \nu_m$. In this case, $y_m$ is compared with the two thresholds set at $+1$ and $-1$. The data sequence $\{d_m\}$ is obtained according to the detection rule

$$\hat{d}_m = \begin{cases} 1, & |y_m| < 1 \\ 0, & |y_m| \ge 1 \end{cases}$$

The extension from binary PAM to multilevel PAM signaling using the duobinary pulses is straightforward. In this case, the M-level amplitude sequence $\{a_m\}$ results in a (noise-free) sequence

$$b_m = a_m + a_{m-1}, \qquad m = 1, 2, \ldots$$

which has $2M - 1$ possible equally spaced levels. The amplitude levels are determined from the relation

$$a_m = 2p_m - (M - 1)$$

where $\{p_m\}$ is the precoded sequence that is obtained from an M-level data sequence $\{d_m\}$ according to the relation

$$p_m = d_m \ominus p_{m-1} \pmod M$$

where the possible values of the sequence $\{d_m\}$ are $0, 1, \ldots, M - 1$.

Since $d_m = (p_m + p_{m-1}) \bmod M$, it follows that

$$d_m = \left(\frac{b_m}{2} + (M - 1)\right) \bmod M$$

Here again, we see that error propagation has been prevented by using precoding.
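The binary sketch above generalizes to M levels with the same three relations; the initial state p0 = 0 is again an assumption:

```python
# Generalizing the earlier sketch to M-level duobinary precoding
# (again assuming the precoder initial state p0 = 0).
def duobinary_mary(data, M=4):
    p_prev = 0
    a_prev = 2 * p_prev - (M - 1)
    received, decoded = [], []
    for d in data:
        p = (d - p_prev) % M               # p_m = d_m (-) p_{m-1} mod M
        a = 2 * p - (M - 1)                # a_m = 2 p_m - (M - 1)
        b = a + a_prev                     # b_m = a_m + a_{m-1}
        received.append(b)
        decoded.append((b // 2 + (M - 1)) % M)
        p_prev, a_prev = p, a
    return received, decoded

data = [0, 0, 1, 3, 1, 2, 0, 3, 3, 2, 0, 1, 0]
b, d_hat = duobinary_mary(data, M=4)
print(b)                                   # [-6, -6, -4, 0, 4, 6, 2, 0, 0, -2, 2, 4, 2]
print(d_hat == data)                       # True
```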
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Example 2.5
Consider the four-level data sequence

0 0 1 3 1 2 0 3 3 2 0 1 0,

which was obtained by mapping two bits into four-level symbols, i.e., 00 ➝ 0, 01 ➝ 1, 10 ➝ 2, and 11 ➝ 3. Determine the precoded sequence $\{p_m\}$, the transmitted sequence $\{a_m\}$, the received sequence $\{b_m\}$, and the decoded sequence $\{\hat{d}_m\}$.

Solution: Using the relations $p_m = d_m \ominus p_{m-1} \pmod 4$, $a_m = 2p_m - 3$, $b_m = a_m + a_{m-1}$, and $\hat{d}_m = (b_m/2 + 3) \bmod 4$, with the precoder initialized to $p_0 = 0$:

Data d_m:         0  0  1  3  1  2  0  3  3  2  0  1  0
Precoded p_m:     0  0  1  2  3  3  1  2  1  1  3  2  2
Transmitted a_m: -3 -3 -1  1  3  3 -1  1 -1 -1  3  1  1
Received b_m:    -6 -6 -4  0  4  6  2  0  0 -2  2  4  2
Decoded d_m:      0  0  1  3  1  2  0  3  3  2  0  1  0

The decoded sequence equals the transmitted data sequence.
In the presence of noise, the received signal-plus-noise is quantized to the nearest of the possible signal levels, and the preceding rule is used on the quantized values to recover the data sequence.

In the case of the modified duobinary pulse, the controlled ISI is specified by the values $x(nT) = 1$ for $n = -1$, $x(nT) = -1$ for $n = 1$, and zero otherwise. Consequently, the noise-free sampled output is

$$b_m = a_m - a_{m-2}$$

where the M-level sequence $\{a_m\}$ is obtained by mapping a precoded sequence according to the relation

$$a_m = 2p_m - (M - 1)$$

and

$$p_m = \left(d_m + p_{m-2}\right) \bmod M$$

From these relations, it is easy to show that the detection rule for recovering the data sequence $\{d_m\}$ from $\{b_m\}$ in the absence of noise is

$$d_m = \left(\frac{b_m}{2}\right) \bmod M$$

The precoding of the data at the transmitter makes it possible to detect the received data on a symbol-by-symbol basis without having to look back at previously detected symbols. Thus, error propagation is avoided.
Channel Equalization
Our objective was to design the filters for zero ISI at the sampling instants. This design
methodology is appropriate when the channel is precisely known and its characteristics do
not change with time.
In practice, we often encounter channels whose frequency-response characteristics are either
unknown or change with time. For example, in data transmission over the dial-up telephone
network, the communication channel will be different every time we dial a number because
the channel route will be different. Once a connection is made, the channel will be time
invariant for a relatively long period of time. This is an example of a channel whose
characteristics are unknown a priori.
Examples of time-varying channels are radio channels, such as ionospheric propagation
channels. These channels are characterized by time-varying frequency response
characteristics. When the channel characteristics are unknown or time varying, the
optimization of the transmitting and receiving filters is not possible.
We may design the transmitting filter to have a square-root raised cosine frequency response,

$$G_T(f) = \begin{cases} \sqrt{X_{rc}(f)}\;e^{-j2\pi f t_0}, & |f| \le W \\ 0, & |f| > W \end{cases}$$

and the receiving filter, with frequency response $G_R(f)$, to be matched to $G_T(f)$. Therefore,

$$|G_T(f)|\,|G_R(f)| = X_{rc}(f)$$

Then, due to channel distortion, the output of the receiving filter is

$$y(t) = \sum_{n} a_n\,x(t - nT) + \nu(t)$$

where $x(t) = g_T(t) \star c(t) \star g_R(t)$ and $\star$ denotes convolution. The filter output may be sampled periodically, at $t = mT$, to produce the sequence

$$y_m = \sum_{n} a_n\,x_{m-n} + \nu_m = x_0\,a_m + \sum_{n \ne m} a_n\,x_{m-n} + \nu_m$$

where $x_n = x(nT)$ and $\nu_n = \nu(nT)$. The middle term on the right-hand side of the equation represents the ISI.
In any practical system, it is reasonable to assume that the ISI affects a finite number of symbols. Hence, we may assume that $x_n = 0$ for $n < -L_1$ and $n > L_2$, where $L_1$ and $L_2$ are finite, positive integers. The ISI observed at the output of the receiving filter may be viewed as being generated by passing the data sequence through an FIR filter with coefficients $\{x_n,\ -L_1 \le n \le L_2\}$, as shown in figure 2.18. This filter is called the equivalent discrete-time channel filter.

Since its input is the discrete information sequence (binary or M-ary), the output of the discrete-time channel filter may be characterized as the output of a finite-state machine with $M^{L_1 + L_2}$ states, corrupted by additive Gaussian noise. Hence, the noise-free output of the filter is described by a trellis having $M^{L_1 + L_2}$ states.
Linear Equalizers
For channels whose frequency-response characteristics are unknown and time invariant, we
may employ a linear filter with adjustable parameters, which are updated on a periodic basis
to compensate for the channel distortion. Such a filter, having parameters that are adjusted
periodically, is called an adaptive equalizer.
First, we consider the design characteristics for a linear equalizer from a frequency domain
viewpoint. Figure 2.20 shows a block diagram of a system that employs a linear filter as a
channel equalizer.
The demodulator consists of a receiving filter with the frequency response $G_R(f)$ in cascade with a channel-equalizing filter that has a frequency response $G_E(f)$. Since $G_R(f)$ is matched to $G_T(f)$ and they are designed so that their product satisfies $|G_T(f)||G_R(f)| = X_{rc}(f)$, $|G_E(f)|$ must compensate for the channel distortion. Hence, the equalizer frequency response must equal the inverse of the channel response,

$$G_E(f) = \frac{1}{C(f)} = \frac{1}{|C(f)|}\,e^{-j\theta_c(f)}, \qquad |f| \le W$$

where $|G_E(f)| = 1/|C(f)|$ and the equalizer phase characteristic is $\theta_E(f) = -\theta_c(f)$. In this case, the equalizer is said to be the inverse channel filter to the channel response. We note that the inverse channel filter completely eliminates the ISI caused by the channel. Since it forces the ISI to be zero at the sampling times $t = mT$, the equalizer is called a zero-forcing equalizer.

Hence, the input to the detector is of the form

$$z_m = a_m + \eta_m$$

where $\eta_m$ is the output noise sample, which has zero mean and variance

$$\sigma_\eta^2 = \int_{-W}^{W} S_\nu(f)\,|G_R(f)|^2\,|G_E(f)|^2\,df = \int_{-W}^{W} \frac{S_\nu(f)\,X_{rc}(f)}{|C(f)|^2}\,df$$

in which $S_\nu(f)$ is the power spectral density of the noise. When the noise is white, $S_\nu(f) = N_0/2$ and the variance becomes

$$\sigma_\eta^2 = \frac{N_0}{2}\int_{-W}^{W} \frac{X_{rc}(f)}{|C(f)|^2}\,df$$

In general, the noise variance at the output of the zero-forcing equalizer is higher than the noise variance at the output of the optimum receiving filter for the case in which the channel is known.
A practical equalizer is implemented as an FIR (transversal) filter with adjustable tap coefficients $\{c_n\}$. The time delay $\tau$ between adjacent taps may be selected as large as $T$, the symbol interval, in which case the FIR equalizer is called a symbol-spaced equalizer.

The input to the equalizer is the sampled sequence given by

$$y(kT) = \sum_{n} a_n\,x(kT - nT) + \nu(kT)$$

When $\tau = T$, frequencies in the received signal that are above the folding frequency $1/2T$ are aliased into frequencies below $1/2T$. In this case, the equalizer compensates for the aliased channel-distorted signal.

When the time delay $\tau$ between adjacent taps is selected such that $1/\tau \ge 2W > 1/T$, no aliasing occurs; hence, the inverse channel equalizer compensates for the true channel distortion. Since $\tau < T$, the channel equalizer is said to have fractionally spaced taps, and it is called a fractionally spaced equalizer. In practice, $\tau$ is often selected as $\tau = T/2$. In this case, the sampling rate at the output of the receiving filter is $2/T$.

The impulse response of the FIR equalizer is

$$g_E(t) = \sum_{n=-K}^{K} c_n\,\delta(t - n\tau)$$

where $\{c_n\}$ are the $2K + 1$ equalizer coefficients and K is chosen sufficiently large so that the equalizer spans the length of the ISI.

Since $X(f) = G_T(f)C(f)G_R(f)$ and $x(t)$ is the signal pulse corresponding to $X(f)$, the equalized output signal pulse is

$$q(t) = \sum_{n=-K}^{K} c_n\,x(t - n\tau)$$

The zero-forcing condition can now be applied to the samples of $q(t)$ taken at times $t = mT$. Since there are $2K + 1$ equalizer coefficients, we can control only $2K + 1$ sampled values of $q(t)$. Specifically, we may force the conditions

$$q(mT) = \sum_{n=-K}^{K} c_n\,x(mT - n\tau) = \begin{cases} 1, & m = 0 \\ 0, & m = \pm 1, \pm 2, \ldots, \pm K \end{cases}$$

which may be expressed in matrix form as $\mathbf{X}\mathbf{c} = \mathbf{q}$, where $\mathbf{X}$ is a $(2K+1)\times(2K+1)$ matrix with elements $x(mT - n\tau)$, $\mathbf{c}$ is the $(2K+1)$-dimensional coefficient vector, and $\mathbf{q}$ is the $(2K+1)$-dimensional column vector with one nonzero element. Thus, we obtain a set of $2K+1$ linear equations for the coefficients of the zero-forcing equalizer (ZFE).
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Example 2.6
Consider a channel-distorted pulse $x(t)$, at the input to the equalizer, given by the expression

$$x(t) = \frac{1}{1 + \left(\dfrac{2t}{T}\right)^2}$$

where $1/T$ is the symbol rate. The pulse is sampled at the rate $2/T$ and equalized by a zero-forcing equalizer. Determine the coefficients of a five-tap zero-forcing equalizer.

Solution: With $\tau = T/2$, the zero-forcing equalizer must satisfy the equations

$$q(mT) = \sum_{n=-2}^{2} c_n\,x(mT - nT/2) = \begin{cases} 1, & m = 0 \\ 0, & m = \pm 1, \pm 2 \end{cases}$$

The matrix $\mathbf{X}$ with elements $x(mT - nT/2) = \dfrac{1}{1 + (2m - n)^2}$ (substitute $t = mT - nT/2$ in the expression for $x(t)$) is given as

$$\mathbf{X} = \begin{bmatrix} \frac{1}{5} & \frac{1}{10} & \frac{1}{17} & \frac{1}{26} & \frac{1}{37} \\[3pt] 1 & \frac{1}{2} & \frac{1}{5} & \frac{1}{10} & \frac{1}{17} \\[3pt] \frac{1}{5} & \frac{1}{2} & 1 & \frac{1}{2} & \frac{1}{5} \\[3pt] \frac{1}{17} & \frac{1}{10} & \frac{1}{5} & \frac{1}{2} & 1 \\[3pt] \frac{1}{37} & \frac{1}{26} & \frac{1}{17} & \frac{1}{10} & \frac{1}{5} \end{bmatrix}$$

with $\mathbf{q} = [0, 0, 1, 0, 0]^T$. Then, the linear equations $\mathbf{X}\mathbf{c} = \mathbf{q}$ can be solved by inverting the matrix $\mathbf{X}$. Thus, we obtain $\mathbf{c}_{opt} = \mathbf{X}^{-1}\mathbf{q}$.
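A short numerical check of this example (the matrix entries follow directly from $x(t) = 1/(1 + (2t/T)^2)$):

```python
import numpy as np

# A numerical check of Example 2.6: build the 5x5 matrix X with elements
# x(mT - nT/2) = 1/(1 + (2m - n)^2) and solve X c = q for the ZFE taps.
K = 2
X = np.array([[1.0 / (1 + (2*m - n)**2) for n in range(-K, K+1)]
              for m in range(-K, K+1)])
q = np.zeros(2*K + 1); q[K] = 1.0      # q(0) = 1, q(+/-T) = q(+/-2T) = 0
c_opt = np.linalg.solve(X, q)          # zero-forcing equalizer coefficients
print(c_opt)
```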
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Linear equalization based on the zero-forcing criterion ignores the noise; an alternative is to choose the tap coefficients to minimize the mean square error (MSE). The desired response sample at the output of the equalizer at $t = mT$ is the transmitted symbol $a_m$. The error is defined as the difference between $a_m$ and the equalizer output

$$z_m = \sum_{n=-K}^{K} c_n\,y(mT - n\tau)$$

The mean square error between the actual output sample $z_m$ and the desired value $a_m$ is

$$\mathrm{MSE} = E\left[\left(a_m - \sum_{n=-K}^{K} c_n\,y(mT - n\tau)\right)^2\right]$$

$$= \sum_{n=-K}^{K}\sum_{k=-K}^{K} c_n\,c_k\,R_Y(n - k) - 2\sum_{k=-K}^{K} c_k\,R_{AY}(k) + E\left[a_m^2\right]$$

where the correlations are defined as

$$R_Y(n - k) = E\left[y(mT - n\tau)\,y(mT - k\tau)\right], \qquad R_{AY}(k) = E\left[a_m\,y(mT - k\tau)\right]$$

and the expectation is taken with respect to the random information sequence and the additive noise.

The MMSE solution is obtained by differentiating the MSE equation with respect to the equalizer coefficients $\{c_n\}$. Thus, we obtain the necessary conditions for the MMSE as

$$\sum_{n=-K}^{K} c_n\,R_Y(n - k) = R_{AY}(k), \qquad k = 0, \pm 1, \ldots, \pm K$$

These are $2K + 1$ linear equations for the equalizer coefficients. The equations depend on the statistical properties (the autocorrelation) of the noise as well as the ISI through the autocorrelation $R_Y(n)$.

These correlation sequences can be estimated by transmitting a test signal over the channel and using the time-average estimates

$$\hat{R}_Y(n) = \frac{1}{K'}\sum_{k=1}^{K'} y(kT - n\tau)\,y(kT)$$

and

$$\hat{R}_{AY}(n) = \frac{1}{K'}\sum_{k=1}^{K'} a_k\,y(kT - n\tau)$$

where $K'$ is the number of transmitted test symbols.
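A minimal training sketch under assumed test data and an assumed three-tap discrete channel: estimate the correlations by time averages, then solve the normal equations (τ = T here):

```python
import numpy as np

# A sketch of MMSE equalizer training: estimate R_Y and R_AY by time
# averages over a known test sequence, then solve the normal equations
# for the tap coefficients (tau = T; channel and noise are assumptions).
rng = np.random.default_rng(2)
K, Np = 2, 5000
a = rng.choice([-1.0, 1.0], size=Np)                  # known binary test symbols
h = np.array([0.1, 1.0, 0.25])                        # assumed discrete channel with ISI
y = np.convolve(a, h, mode="same") + rng.normal(0, 0.1, Np)

lags = range(-K, K + 1)
# Time-average correlation estimates (interior samples only, to avoid wraparound).
R_Y = {m: np.mean(y[2*K:Np-2*K] * np.roll(y, m)[2*K:Np-2*K])
       for m in range(-2*K, 2*K + 1)}
R_AY = np.array([np.mean(a[2*K:Np-2*K] * np.roll(y, n)[2*K:Np-2*K]) for n in lags])

A = np.array([[R_Y[n - k] for n in lags] for k in lags])  # matrix of R_Y(n - k)
c = np.linalg.solve(A, R_AY)                              # MMSE tap coefficients
print(c)                                # taps approximately invert the channel ISI
```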