21EC51 DC Module 2

The document discusses signaling over additive white Gaussian noise channels. It covers topics like geometric representation of signals, Gram-Schmidt orthogonalization procedure, optimum receivers using coherent detection, and designing band-limited signals. Key aspects covered include representing signals as linear combinations of orthonormal basis functions, minimizing probability of symbol error, and relating energy of signals to lengths of corresponding vectors.


Module-2

Signaling and Communication through Band-Limited AWGN Channels:

Signaling over AWGN Channels: Introduction, Geometric representation of signals, Gram-Schmidt orthogonalization procedure, Conversion of the continuous AWGN channel into a vector channel (without statistical characterization), Optimum receivers using coherent detection: ML decoding, Correlation receiver, Matched filter receiver.
Signal design for band-limited channels: Design of band-limited signals for zero ISI - the Nyquist criterion (statement only), Design of band-limited signals with controlled ISI - partial-response signals, Probability of error for detection of digital PAM: Symbol-by-symbol detection of data with controlled ISI.

Signaling over AWGN Channels

The source output consists of a sequence of 1s and 0s, with each binary symbol being emitted every $T_b$ seconds. The transmitting part of the digital communication system takes the 1s and 0s emitted by the source and encodes them into distinct signals denoted by $s_1(t)$ and $s_2(t)$, respectively, which are suitable for transmission over the analog channel. Both $s_1(t)$ and $s_2(t)$ are real-valued energy signals, as shown by
$$E_i = \int_0^{T_b} s_i^2(t)\,dt < \infty, \quad i = 1, 2$$

Figure 2.1 AWGN model of a channel.

With the analog channel represented by an AWGN model, depicted in Figure 2.1, the received signal is defined by
$$x(t) = s_i(t) + w(t), \quad 0 \le t \le T_b,\ i = 1, 2$$
where $w(t)$ is the channel noise. The receiver has the task of observing the received signal $x(t)$ for a duration of $T_b$ seconds and then making an estimate of the transmitted signal $s_i(t)$ or, equivalently, the symbol $i = 1, 2$.
However, due to the presence of channel noise, the receiver will make occasional errors. The requirement, therefore, is to design the receiver so as to minimize the average probability of symbol error, defined as
$$P_e = p_1 \Pr(\hat{m} = 0 \mid \text{symbol 1 sent}) + p_0 \Pr(\hat{m} = 1 \mid \text{symbol 0 sent})$$
where $p_1$ and $p_0$ are the prior probabilities of transmitting symbols 1 and 0, respectively, and $\hat{m}$ is the estimate of the symbol (1 or 0) sent by the source, as computed by the receiver. The two $\Pr(\cdot \mid \cdot)$ terms are conditional probabilities of error.

To minimize the average probability of symbol error between the receiver output and the symbol emitted by the source, in a generic setting that involves an M-ary alphabet whose symbols are denoted by $m_i,\ i = 1, 2, \ldots, M$, we have two tasks:
1. To optimize the design of the receiver so as to minimize the average probability of symbol error.
2. To choose the set of signals $\{s_i(t)\},\ i = 1, \ldots, M$, for representing the symbols $m_i$, respectively, since this choice affects the average probability of symbol error.
Geometric Representation of Signals
The essence of the geometric representation of signals is to represent any set of M energy signals $\{s_i(t)\}$ as linear combinations of N orthonormal basis functions, where $N \le M$. Given a set of real-valued energy signals $s_1(t), s_2(t), \ldots, s_M(t)$, each of duration $T$ seconds, we write
$$s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t), \quad 0 \le t \le T,\ i = 1, 2, \ldots, M \tag{4}$$
where the coefficients of the expansion are defined by
$$s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \quad i = 1, \ldots, M,\ j = 1, \ldots, N \tag{5}$$
The real-valued basis functions $\phi_1(t), \ldots, \phi_N(t)$ form an orthonormal set,
$$\int_0^T \phi_i(t)\,\phi_j(t)\,dt = \delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \ne j \end{cases} \tag{6}$$
where $\delta_{ij}$ is the Kronecker delta.
∙ The first condition of eq. (6) states that each basis function is normalized to have unit energy.
∙ The second condition states that the basis functions are orthogonal with respect to each other over the interval $0 \le t \le T$.

For prescribed $i$, the set of coefficients $\{s_{ij}\},\ j = 1, \ldots, N$, may be viewed as an N-dimensional signal vector, denoted by $\mathbf{s}_i$. The important point to note here is that the vector $\mathbf{s}_i$ bears a one-to-one relationship with the transmitted signal $s_i(t)$:
a) Given the N elements of the vector $\mathbf{s}_i$ as input, we may use the scheme shown in Figure 2.2a to generate the signal $s_i(t)$, which follows directly from eq. (4). This scheme consists of a bank of N multipliers, each multiplier having its own basis function, followed by a summer. The scheme of Figure 2.2a may be viewed as a synthesizer.
b) Given the signal $s_i(t)$ as input, we may use the scheme shown in Figure 2.2b to calculate the coefficients $s_{ij}$, which follows directly from eq. (5). This second scheme consists of a bank of N product integrators or correlators with a common input, each supplied with its own basis function. The scheme of Figure 2.2b may be viewed as an analyzer.

Figure 2.2 (a) Synthesizer for generating the signal $s_i(t)$. (b) Analyzer for reconstructing the signal vector $\mathbf{s}_i$.

We may state that each signal in the set $\{s_i(t)\}$ is completely determined by the signal vector
$$\mathbf{s}_i = [s_{i1}, s_{i2}, \ldots, s_{iN}]^T, \quad i = 1, 2, \ldots, M$$
If we extend the conventional notion of two- and three-dimensional Euclidean spaces to an N-dimensional Euclidean space, we may visualize the set of signal vectors $\{\mathbf{s}_i \mid i = 1, \ldots, M\}$ as defining a corresponding set of M points in an N-dimensional Euclidean space, with N mutually perpendicular axes labeled $\phi_1, \phi_2, \ldots, \phi_N$. This N-dimensional Euclidean space is called the signal space.
Example: A two-dimensional signal space with three signals; that is, N = 2 and M = 3.

Figure 2.3 Illustrating the geometric representation of signals for the case when N = 2 and M = 3.

In an N-dimensional Euclidean space, we define lengths of vectors and angles between vectors.
The length (also called the absolute value or norm) of a signal vector $\mathbf{s}_i$ is denoted by the symbol $\|\mathbf{s}_i\|$. The squared length of any signal vector $\mathbf{s}_i$ is defined to be the inner product or dot product of $\mathbf{s}_i$ with itself, as shown by
$$\|\mathbf{s}_i\|^2 = \mathbf{s}_i^T \mathbf{s}_i = \sum_{j=1}^{N} s_{ij}^2, \quad i = 1, 2, \ldots, M$$
where $s_{ij}$ is the jth element of $\mathbf{s}_i$ and the superscript T denotes matrix transposition. There is an interesting relationship between the energy content of a signal and its representation as a vector. By definition, the energy of a signal $s_i(t)$ of duration T seconds is
$$E_i = \int_0^T s_i^2(t)\,dt$$
Substituting eq. (4) into this expression, we get
$$E_i = \int_0^T \left[\sum_{j=1}^{N} s_{ij}\,\phi_j(t)\right]\left[\sum_{k=1}^{N} s_{ik}\,\phi_k(t)\right] dt$$
Interchanging the order of summation and integration, which we can do because they are both linear operations, and then rearranging terms, we get
$$E_i = \sum_{j=1}^{N}\sum_{k=1}^{N} s_{ij}\,s_{ik} \int_0^T \phi_j(t)\,\phi_k(t)\,dt$$
Since, by definition, the $\phi_j(t)$ form an orthonormal set, this reduces to
$$E_i = \sum_{j=1}^{N} s_{ij}^2 = \|\mathbf{s}_i\|^2$$
Thus, the energy of an energy signal $s_i(t)$ is equal to the squared length of the corresponding signal vector $\mathbf{s}_i$.
Note: In the case of a pair of signals $s_i(t)$ and $s_k(t)$, represented by the signal vectors $\mathbf{s}_i$ and $\mathbf{s}_k$, respectively, we may also show that
$$\int_0^T s_i(t)\,s_k(t)\,dt = \mathbf{s}_i^T \mathbf{s}_k$$
The above equation states:

The inner product of the energy signals $s_i(t)$ and $s_k(t)$ over the interval $[0, T]$ is equal to the inner product of their respective vector representations $\mathbf{s}_i$ and $\mathbf{s}_k$.
The Euclidean distance relation involving the vector representations of the energy signals $s_i(t)$ and $s_k(t)$ is described by
$$\|\mathbf{s}_i - \mathbf{s}_k\|^2 = \sum_{j=1}^{N} (s_{ij} - s_{kj})^2 = \int_0^T \big(s_i(t) - s_k(t)\big)^2\,dt$$
where $\|\mathbf{s}_i - \mathbf{s}_k\|$ is the Euclidean distance between the points represented by the signal vectors $\mathbf{s}_i$ and $\mathbf{s}_k$.
The angle $\theta_{ik}$ subtended between two signal vectors $\mathbf{s}_i$ and $\mathbf{s}_k$ is defined through its cosine, which is equal to the inner product of the two vectors divided by the product of their individual norms:
$$\cos\theta_{ik} = \frac{\mathbf{s}_i^T \mathbf{s}_k}{\|\mathbf{s}_i\|\,\|\mathbf{s}_k\|}$$
The two vectors $\mathbf{s}_i$ and $\mathbf{s}_k$ are thus orthogonal or perpendicular to each other if their inner product is zero, in which case $\theta_{ik} = 90°$.


The Schwarz Inequality
Consider any pair of energy signals $s_1(t)$ and $s_2(t)$. The Schwarz inequality states
$$\left(\int_{-\infty}^{\infty} s_1(t)\,s_2(t)\,dt\right)^2 \le \left(\int_{-\infty}^{\infty} s_1^2(t)\,dt\right)\left(\int_{-\infty}^{\infty} s_2^2(t)\,dt\right)$$
The equality holds if, and only if, $s_2(t) = c\,s_1(t)$, where c is any constant.
Proof: To prove this important inequality, let $s_1(t)$ and $s_2(t)$ be expressed in terms of the pair of orthonormal basis functions $\phi_1(t)$ and $\phi_2(t)$ as follows:
$$s_1(t) = s_{11}\,\phi_1(t) + s_{12}\,\phi_2(t)$$
$$s_2(t) = s_{21}\,\phi_1(t) + s_{22}\,\phi_2(t)$$
where $\phi_1(t)$ and $\phi_2(t)$ satisfy the orthonormality conditions over the time interval $(-\infty, \infty)$:
$$\int_{-\infty}^{\infty} \phi_i(t)\,\phi_j(t)\,dt = \delta_{ij}$$
We may represent the signals $s_1(t)$ and $s_2(t)$ by the following respective pair of vectors:
$$\mathbf{s}_1 = \begin{bmatrix} s_{11} \\ s_{12} \end{bmatrix}, \quad \mathbf{s}_2 = \begin{bmatrix} s_{21} \\ s_{22} \end{bmatrix}$$

Figure 2.4 Vector representations of signals $s_1(t)$ and $s_2(t)$, providing the background picture for proving the Schwarz inequality.

From Figure 2.4, the cosine of the angle $\theta$ subtended between the vectors $\mathbf{s}_1$ and $\mathbf{s}_2$ is
$$\cos\theta = \frac{\mathbf{s}_1^T \mathbf{s}_2}{\|\mathbf{s}_1\|\,\|\mathbf{s}_2\|} = \frac{\int_{-\infty}^{\infty} s_1(t)\,s_2(t)\,dt}{\left(\int_{-\infty}^{\infty} s_1^2(t)\,dt\right)^{1/2}\left(\int_{-\infty}^{\infty} s_2^2(t)\,dt\right)^{1/2}}$$
Recognizing that $|\cos\theta| \le 1$, we have
$$\left|\int_{-\infty}^{\infty} s_1(t)\,s_2(t)\,dt\right| \le \left(\int_{-\infty}^{\infty} s_1^2(t)\,dt\right)^{1/2}\left(\int_{-\infty}^{\infty} s_2^2(t)\,dt\right)^{1/2}$$
Squaring both sides yields the Schwarz inequality stated above. Finally, we note that $|\cos\theta| = 1$ if, and only if, $\mathbf{s}_2 = c\,\mathbf{s}_1$; that is, $s_2(t) = c\,s_1(t)$, where c is an arbitrary constant.
Note: For complex-valued signals, the Schwarz inequality is given by
$$\left|\int_{-\infty}^{\infty} s_1(t)\,s_2^*(t)\,dt\right|^2 \le \left(\int_{-\infty}^{\infty} |s_1(t)|^2\,dt\right)\left(\int_{-\infty}^{\infty} |s_2(t)|^2\,dt\right)$$
where the asterisk denotes complex conjugation, and the equality holds if, and only if, $s_2(t) = c\,s_1(t)$, where c is a constant.
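As a quick numerical sanity check of the inequality (an illustrative sketch, not part of the original notes), we can approximate the integrals by Riemann sums for two arbitrary finite-energy signals, and verify the equality case by taking one signal proportional to the other:

```python
import numpy as np

# Riemann-sum approximation of the integrals in the Schwarz inequality
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
s1 = np.sin(2 * np.pi * t)       # example signal 1
s2 = np.cos(6 * np.pi * t)       # example signal 2, not proportional to s1

lhs = (np.sum(s1 * s2) * dt) ** 2
rhs = (np.sum(s1 ** 2) * dt) * (np.sum(s2 ** 2) * dt)

# Equality case: s2(t) = c * s1(t) for a constant c
s2c = -2.5 * s1
lhs_eq = (np.sum(s1 * s2c) * dt) ** 2
rhs_eq = (np.sum(s1 ** 2) * dt) * (np.sum(s2c ** 2) * dt)
```

For these two signals `lhs` is strictly below `rhs`, while `lhs_eq` and `rhs_eq` agree to within floating-point rounding, as the proof predicts.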

Gram–Schmidt Orthogonalization Procedure

The geometric representation of energy signals is obtained through the Gram–Schmidt orthogonalization procedure, which constructs a complete orthonormal set of basis functions. To proceed with the formulation of this procedure:
1. We have a set of M energy signals denoted by $s_1(t), s_2(t), \ldots, s_M(t)$. The first basis function is defined by
$$\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}}$$
where $E_1$ is the energy of the signal $s_1(t)$. Then we have
$$s_1(t) = \sqrt{E_1}\,\phi_1(t) = s_{11}\,\phi_1(t)$$
where the coefficient $s_{11} = \sqrt{E_1}$ and $\phi_1(t)$ has unit energy.

2. Using the signal $s_2(t)$, we define the coefficient $s_{21}$ as
$$s_{21} = \int_0^T s_2(t)\,\phi_1(t)\,dt$$
We may thus introduce a new intermediate function
$$g_2(t) = s_2(t) - s_{21}\,\phi_1(t)$$
which is orthogonal to $\phi_1(t)$ over the interval $0 \le t \le T$. The second basis function is given as
$$\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^T g_2^2(t)\,dt}} = \frac{s_2(t) - s_{21}\,\phi_1(t)}{\sqrt{E_2 - s_{21}^2}}$$
where $E_2$ is the energy of the signal $s_2(t)$. We readily see that
$$\int_0^T \phi_2^2(t)\,dt = 1 \quad \text{and} \quad \int_0^T \phi_1(t)\,\phi_2(t)\,dt = 0$$
That is to say, $\phi_1(t)$ and $\phi_2(t)$ form an orthonormal pair, as required.

3. Continuing the procedure in this manner, we may, in general, define the intermediate function
$$g_i(t) = s_i(t) - \sum_{j=1}^{i-1} s_{ij}\,\phi_j(t)$$
where the coefficients are defined by
$$s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \quad j = 1, 2, \ldots, i - 1$$
For $i = 1$, the function $g_i(t)$ reduces to $s_i(t)$. Given the $g_i(t)$, the set of basis functions is defined by
$$\phi_i(t) = \frac{g_i(t)}{\sqrt{\int_0^T g_i^2(t)\,dt}}, \quad i = 1, 2, \ldots, N$$
which form an orthonormal set.
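The procedure above can be sketched numerically on sampled waveforms. In the following Python sketch (the function name is ours, purely illustrative), the integrals are approximated by Riemann sums over a common sampling grid of spacing dt, and signals that are linearly dependent on earlier ones are skipped, so that N ≤ M basis functions result:

```python
import numpy as np

def gram_schmidt(signals, dt):
    """Gram-Schmidt orthogonalization of sampled energy signals.

    signals: list of 1-D arrays sampled on a common grid with spacing dt.
    Returns (basis, coeffs): the orthonormal basis functions phi_j and the
    coefficient matrix s_ij = integral of s_i(t) * phi_j(t) dt.
    """
    basis = []
    for s in signals:
        # Intermediate function g_i(t) = s_i(t) - sum_j s_ij * phi_j(t)
        g = s.astype(float).copy()
        for phi in basis:
            g -= (np.sum(s * phi) * dt) * phi
        energy = np.sum(g * g) * dt
        if energy > 1e-12:          # skip signals dependent on earlier ones
            basis.append(g / np.sqrt(energy))
    coeffs = np.array([[np.sum(s * phi) * dt for phi in basis]
                       for s in signals])
    return basis, coeffs
```

A useful check on any such implementation is that the returned basis is orthonormal and that the energy of each signal equals the squared length of its coefficient vector, as derived earlier.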



++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Example 2.1
Figure 2.5 displays the waveforms of four signals $s_1(t), s_2(t), s_3(t), s_4(t)$.
a. Using the Gram–Schmidt orthogonalization procedure, find an orthonormal basis for this set of signals.
b. Construct the corresponding signal-space diagram.
Solution:

Figure 2.5

In Figure 2.5, each signal is a rectangular pulse of unit amplitude: $s_1(t)$ occupies $[0, T/3]$, $s_2(t)$ occupies $[0, 2T/3]$, $s_3(t)$ occupies $[T/3, T]$, and $s_4(t)$ occupies $[0, T]$. Here M = 4, but we need to find only three basis functions, for the independent signals $s_1(t), s_2(t), s_3(t)$.
1. The energy of $s_1(t)$ is
$$E_1 = \int_0^{T/3} s_1^2(t)\,dt = \frac{T}{3}$$
so the first basis function is
$$\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} = \begin{cases} \sqrt{3/T}, & 0 \le t \le T/3 \\ 0, & \text{otherwise} \end{cases}$$
2. The coefficient $s_{21}$ is
$$s_{21} = \int_0^{T} s_2(t)\,\phi_1(t)\,dt = \int_0^{T/3} \sqrt{3/T}\,dt = \sqrt{T/3}$$
The new intermediate function $g_2(t) = s_2(t) - s_{21}\,\phi_1(t)$ is independent of $\phi_1(t)$; it equals 1 on the time interval $[T/3, 2T/3]$ and is zero elsewhere. Normalizing,
$$\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^T g_2^2(t)\,dt}} = \begin{cases} \sqrt{3/T}, & T/3 \le t \le 2T/3 \\ 0, & \text{otherwise} \end{cases}$$
3. For $s_3(t)$, there is no common region between $s_3(t)$ and $\phi_1(t)$, so
$$s_{31} = \int_0^T s_3(t)\,\phi_1(t)\,dt = 0$$
and, with the integration limits selected from the common interval between $s_3(t)$ and $\phi_2(t)$,
$$s_{32} = \int_0^T s_3(t)\,\phi_2(t)\,dt = \int_{T/3}^{2T/3} \sqrt{3/T}\,dt = \sqrt{T/3}$$
Therefore,
$$g_3(t) = s_3(t) - s_{31}\,\phi_1(t) - s_{32}\,\phi_2(t)$$
which equals 1 on $[2T/3, T]$ and is zero elsewhere, so that
$$\phi_3(t) = \frac{g_3(t)}{\sqrt{\int_0^T g_3^2(t)\,dt}} = \begin{cases} \sqrt{3/T}, & 2T/3 \le t \le T \\ 0, & \text{otherwise} \end{cases}$$
b. The signal vectors follow by projecting each signal onto the basis:
1. $\mathbf{s}_1 = \big[\sqrt{T/3},\ 0,\ 0\big]^T$
2. $\mathbf{s}_2 = \big[\sqrt{T/3},\ \sqrt{T/3},\ 0\big]^T$
3. $\mathbf{s}_3 = \big[0,\ \sqrt{T/3},\ \sqrt{T/3}\big]^T$
4. We know that $s_4(t) = \sqrt{T/3}\,\phi_1(t) + \sqrt{T/3}\,\phi_2(t) + \sqrt{T/3}\,\phi_3(t)$, so $\mathbf{s}_4 = \big[\sqrt{T/3},\ \sqrt{T/3},\ \sqrt{T/3}\big]^T$.
The signal-space diagram plots these four points in the three-dimensional space spanned by $\phi_1, \phi_2, \phi_3$.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Example 2.2
a. Using the Gram–Schmidt orthogonalization procedure, find a set of orthonormal basis functions to represent the three signals shown in Figure 2.6.
b. Express each of these signals in terms of the set of basis functions found in part a.
Figure 2.6
Solution:
In Figure 2.6, all three signals are linearly independent. Therefore, we need to find three orthonormal basis functions for the independent signals $s_1(t), s_2(t), s_3(t)$.
1. The first basis function is obtained by normalizing $s_1(t)$ to unit energy:
$$\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}}$$
2. The coefficient $s_{21} = \int_0^T s_2(t)\,\phi_1(t)\,dt$ is computed; the new intermediate function $g_2(t) = s_2(t) - s_{21}\,\phi_1(t)$ is independent of $\phi_1(t)$, being nonzero only over the time interval outside the support of $s_1(t)$. Normalizing $g_2(t)$ gives $\phi_2(t)$.
3. Similarly, the coefficients $s_{31}$ and $s_{32}$ are computed, the intermediate function
$$g_3(t) = s_3(t) - s_{31}\,\phi_1(t) - s_{32}\,\phi_2(t)$$
is formed, and the third basis function is obtained as
$$\phi_3(t) = \frac{g_3(t)}{\sqrt{\int_0^T g_3^2(t)\,dt}}$$
b. Each signal is then expressed in terms of the basis as
$$s_i(t) = \sum_{j=1}^{3} s_{ij}\,\phi_j(t), \quad s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \quad i = 1, 2, 3$$
with the numerical coefficients determined by the waveforms of Figure 2.6.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
2B1Q Code:
The 2B1Q code is the North American line code for a special class of modems called digital subscriber lines. This code represents a quaternary PAM signal, as shown in the Gray-encoded alphabet of Table 2.1.
Table 2.1 Amplitude levels of the 2B1Q code

Signal Amplitude   Gray Code
-3                 00
-1                 01
+1                 11
+3                 10

The four possible signals $s_1(t), s_2(t), s_3(t), s_4(t)$ are amplitude-scaled versions of a Nyquist pulse. Each signal represents a dibit (a pair of bits).
Let $\phi(t)$ denote a pulse normalized to have unit energy. The so-defined $\phi(t)$ is the only basis function for the vector representation of the 2B1Q code. The signal-space representation of this code is as shown in Figure 2.7. It consists of four signal vectors $\mathbf{s}_i,\ i = 1, \ldots, 4$, located on the $\phi$-axis in a symmetric manner about the origin. In this example, we have M = 4 and N = 1.
Figure 2.7 Signal-space representation of the 2B1Q code.
We may generalize the result depicted in Figure 2.7 for the 2B1Q code as follows: the signal-space diagram of an M-ary PAM signal, in general, is one-dimensional with M signal points uniformly positioned on the only axis of the diagram.
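The dibit-to-level mapping of Table 2.1 can be written directly in code. The sketch below is illustrative (the function and dictionary names are ours, not part of any standard):

```python
# Gray-encoded dibit -> amplitude level, per Table 2.1
GRAY_TO_LEVEL = {"00": -3, "01": -1, "11": +1, "10": +3}

def encode_2b1q(bits):
    """Map a bit string of even length to a list of 2B1Q levels, one per dibit."""
    if len(bits) % 2 != 0:
        raise ValueError("2B1Q encodes two bits per symbol")
    return [GRAY_TO_LEVEL[bits[i:i + 2]] for i in range(0, len(bits), 2)]
```

Note the Gray property of the mapping: adjacent amplitude levels differ in only one bit, so the most likely symbol errors (to a neighboring level) cause only a single bit error.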
Conversion of the Continuous AWGN Channel into a Vector Channel
If we consider the received signal
$$x(t) = s_i(t) + w(t), \quad 0 \le t \le T,\ i = 1, 2, \ldots, M$$
as input to the bank of N product integrators or correlators, where $w(t)$ is a sample function of the white Gaussian noise process of zero mean and power spectral density $N_0/2$, we find that the output of correlator $j$ is the sample value $x_j$ of a random variable $X_j$, defined by
$$x_j = \int_0^T x(t)\,\phi_j(t)\,dt = s_{ij} + w_j, \quad j = 1, 2, \ldots, N$$
The first component, $s_{ij}$, is the deterministic component of $x_j$ due to the transmitted signal $s_i(t)$, as shown by
$$s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt$$
The second component, $w_j$, is the sample value of a random variable due to the channel noise $w(t)$, as shown by
$$w_j = \int_0^T w(t)\,\phi_j(t)\,dt$$
Consider next a new stochastic process whose sample function $x'(t)$ is related to the received signal $x(t)$ as follows:
$$x'(t) = x(t) - \sum_{j=1}^{N} x_j\,\phi_j(t)$$
We know that $x(t) = s_i(t) + w(t)$ and $x_j = s_{ij} + w_j$; therefore,
$$x'(t) = s_i(t) + w(t) - \sum_{j=1}^{N} (s_{ij} + w_j)\,\phi_j(t) = w(t) - \sum_{j=1}^{N} w_j\,\phi_j(t) = w'(t)$$
The sample function $x'(t)$ thus depends solely on the channel noise w(t). We may therefore express the received signal as
$$x(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) + x'(t)$$
where $x'(t)$ is a remainder term that must be included on the right-hand side of the above equation to preserve the equality. The expansion of the above equation is random (stochastic), due to the channel noise at the receiver input.
Statistical Characterization of the Correlator Outputs
We now develop a statistical characterization of the set of N correlator outputs. Let $X(t)$ denote the stochastic process, a sample function of which is represented by the received signal $x(t)$. Let $X_j$ denote the random variable whose sample value is represented by the correlator output $x_j,\ j = 1, 2, \ldots, N$.

According to the AWGN model, the stochastic process $X(t)$ is a Gaussian process. It follows, therefore, that $X_j$ is a Gaussian random variable for all j. Hence, $X_j$ is characterized completely by its mean and variance.

Let $W_j$ denote the random variable represented by the sample value $w_j$ produced by the jth correlator in response to the white Gaussian noise component $w(t)$.

a) The mean of $X_j$ depends only on $s_{ij}$, since $W_j$ has zero mean:
$$\mu_{X_j} = E[X_j] = E[s_{ij} + W_j] = s_{ij} + E[W_j] = s_{ij}$$
b) The variance of $X_j$ is that of the noise component alone:
$$\sigma_{X_j}^2 = \operatorname{var}[X_j] = E[W_j^2]$$
where the random variable $W_j$ is defined by
$$W_j = \int_0^T W(t)\,\phi_j(t)\,dt$$
Therefore,
$$E[W_j^2] = E\left[\int_0^T\int_0^T W(t)\,W(u)\,\phi_j(t)\,\phi_j(u)\,dt\,du\right]$$
Interchanging the order of integration and expectation, which we can do because they are both linear operations, we obtain
$$E[W_j^2] = \int_0^T\int_0^T \phi_j(t)\,\phi_j(u)\,R_W(t, u)\,dt\,du$$
where $R_W(t, u)$ is the autocorrelation function of the noise process $W(t)$. Since this noise is stationary, $R_W(t, u)$ depends only on the time difference $t - u$; and since $W(t)$ is white with a constant power spectral density $N_0/2$, we may express $R_W$ as
$$R_W(t, u) = \frac{N_0}{2}\,\delta(t - u)$$
Using the sifting property of the delta function and the fact that $\phi_j(t)$ has unit energy,
$$\sigma_{X_j}^2 = \frac{N_0}{2}\int_0^T\int_0^T \phi_j(t)\,\phi_j(u)\,\delta(t - u)\,dt\,du = \frac{N_0}{2}\int_0^T \phi_j^2(t)\,dt = \frac{N_0}{2}$$
c) Since the basis functions form an orthonormal set, the $X_j$ are mutually uncorrelated, as shown by
$$\operatorname{cov}[X_j X_k] = E\big[(X_j - \mu_{X_j})(X_k - \mu_{X_k})\big] = E[W_j W_k]$$
$$= E\left[\int_0^T W(t)\,\phi_j(t)\,dt \int_0^T W(u)\,\phi_k(u)\,du\right] = \int_0^T\int_0^T \phi_j(t)\,\phi_k(u)\,R_W(t, u)\,dt\,du$$
$$= \frac{N_0}{2}\int_0^T\int_0^T \phi_j(t)\,\phi_k(u)\,\delta(t - u)\,dt\,du = \frac{N_0}{2}\int_0^T \phi_j(t)\,\phi_k(t)\,dt = 0, \quad j \ne k$$
Since the $X_j$ are uncorrelated Gaussian random variables, they are also statistically independent.


The vector of random variables is given by
$$\mathbf{X} = [X_1, X_2, \ldots, X_N]^T$$
whose elements are independent Gaussian random variables with mean values equal to $s_{ij}$ and variances equal to $N_0/2$.
Since the elements of the vector $\mathbf{X}$ are statistically independent, we may express the conditional probability density function of the vector $\mathbf{x}$, given that the signal $s_i(t)$ or the corresponding symbol $m_i$ was sent, as the product of the conditional probability density functions of its individual elements:
$$f_{\mathbf{X}}(\mathbf{x} \mid m_i) = \prod_{j=1}^{N} f_{X_j}(x_j \mid m_i), \quad i = 1, 2, \ldots, M$$
where the vector $\mathbf{x}$ and scalar $x_j$ are sample values of the random vector $\mathbf{X}$ and random variable $X_j$, respectively. The vector $\mathbf{x}$ is called the observation vector; correspondingly, $x_j$ is called an element of the observation vector. A channel that satisfies the above equation is said to be a memoryless channel.
Since each $X_j$ is a Gaussian random variable with mean $s_{ij}$ and variance $N_0/2$, we have
$$f_{X_j}(x_j \mid m_i) = \frac{1}{\sqrt{\pi N_0}}\exp\left[-\frac{(x_j - s_{ij})^2}{N_0}\right], \quad j = 1, \ldots, N,\ i = 1, \ldots, M$$
Taking the product over all j, we have
$$f_{\mathbf{X}}(\mathbf{x} \mid m_i) = (\pi N_0)^{-N/2}\exp\left[-\frac{1}{N_0}\sum_{j=1}^{N}(x_j - s_{ij})^2\right], \quad i = 1, 2, \ldots, M$$
Now the remainder noise term is to be considered. The noise process $W(t)$ is Gaussian with zero mean; similarly, the noise process represented by the sample function $w'(t)$ is also a zero-mean Gaussian process. Any random variable derived from the remainder noise process by sampling it at time $t_k$ is statistically independent of the set of random variables $\{X_j\}$:
$$E[W'(t_k)\,X_j] = 0, \quad j = 1, \ldots, N,\ 0 \le t_k \le T$$

Theorem of irrelevance:
The above equation states that the remainder term $x'(t)$ is irrelevant to the decision as to which particular signal was transmitted.
"As far as signal detection in AWGN is concerned, only the projections of the noise onto the basis functions of the signal set affect the sufficient statistics of the detection problem; the remainder of the noise is irrelevant."


Likelihood Function
In the AWGN channel, the conditional pdf $f_{\mathbf{X}}(\mathbf{x} \mid m_i)$ describes the observation vector $\mathbf{x}$ given the transmitted message symbol $m_i$. At the receiver, however, we have the exact opposite situation: we are given the observation vector $\mathbf{x}$, and the requirement is to estimate the message symbol that is responsible for generating $\mathbf{x}$.
To emphasize this, we introduce the idea of a likelihood function, denoted by $L(m_i)$ and defined by
$$L(m_i) = f_{\mathbf{X}}(\mathbf{x} \mid m_i), \quad i = 1, 2, \ldots, M$$
It is important to note that, although $L(m_i)$ and $f_{\mathbf{X}}(\mathbf{x} \mid m_i)$ have the same mathematical form, their individual meanings are quite different.
In practice, we use the log-likelihood function, denoted by $l(m_i)$ and defined by
$$l(m_i) = \ln L(m_i), \quad i = 1, 2, \ldots, M$$
where ln denotes the natural logarithm. The log-likelihood function bears a one-to-one relationship to the likelihood function for two reasons:
1. By definition, a probability density function is always nonnegative; it follows, therefore, that the likelihood function is likewise a nonnegative quantity.
2. The logarithmic function is a monotonically increasing function of its argument.
For the AWGN channel, substituting
$$f_{\mathbf{X}}(\mathbf{x} \mid m_i) = (\pi N_0)^{-N/2}\exp\left[-\frac{1}{N_0}\sum_{j=1}^{N}(x_j - s_{ij})^2\right]$$
into the definition gives the log-likelihood function
$$l(m_i) = -\frac{1}{N_0}\sum_{j=1}^{N}(x_j - s_{ij})^2, \quad i = 1, 2, \ldots, M$$
where we have ignored the constant term $-(N/2)\ln(\pi N_0)$, since it bears no relation to the message symbol $m_i$.
Optimum Receivers Using Coherent Detection
Maximum Likelihood Decoding

Figure 2.8 Illustrating the effect of (a) noise perturbation on (b) the location of the received signal point.
1. Consider that one of the M possible signals $s_1(t), \ldots, s_M(t)$ is transmitted in each signaling interval of duration T seconds with equal probability, 1/M.
2. For geometric signal representation, the signal $s_i(t)$ is applied to a bank of correlators with a common input, supplied with an appropriate set of N orthonormal basis functions.
3. The resulting correlator outputs define the signal vector $\mathbf{s}_i$. We refer to this point as the transmitted signal point, or message point. The set of message points corresponding to the set of transmitted signals $\{s_i(t)\}$ is called a message constellation.
4. The representation of the received signal $x(t)$ is complicated by the presence of additive noise $w(t)$. We note that when the received signal is applied to the bank of N correlators, the correlator outputs define the observation vector $\mathbf{x}$, which differs from the signal vector $\mathbf{s}_i$ by the noise vector $\mathbf{w}$.
5. The noise vector $\mathbf{w}$ represents that portion of the noise that interferes with the detection process; the remaining portion of the noise, $w'(t)$, is tuned out by the bank of correlators and is therefore irrelevant.
6. Based on the observation vector $\mathbf{x}$, we may represent the received signal by a point in the same Euclidean space used to represent the transmitted signal. We refer to this second point as the received signal point.
7. Due to the presence of noise, the received signal point wanders about the message point, as shown in Figure 2.8(a).
Signal detection problem
Given the observation vector $\mathbf{x}$, perform a mapping from $\mathbf{x}$ to an estimate $\hat{m}$ of the transmitted symbol $m_i$ in a way that minimizes the probability of error in the decision-making process.
1. Given the observation vector $\mathbf{x}$, suppose we make the decision $\hat{m} = m_i$. The probability of error in this decision, which we denote by $P_e(m_i \mid \mathbf{x})$, is
$$P_e(m_i \mid \mathbf{x}) = 1 - \Pr(m_i \text{ sent} \mid \mathbf{x})$$
The requirement is to minimize the average probability of error in mapping each given observation vector $\mathbf{x}$ into a decision. We may therefore state the optimum decision rule:
"Set $\hat{m} = m_i$ if $\Pr(m_i \text{ sent} \mid \mathbf{x}) \ge \Pr(m_k \text{ sent} \mid \mathbf{x})$ for all $k \ne i$."
The decision rule so described is referred to as the maximum a posteriori probability (MAP) rule. The system used to implement this rule is called a maximum a posteriori decoder.
2. We may express the above rule in terms of the prior probabilities of the transmitted signals and the likelihood functions, using Bayes' rule. We may restate the MAP rule as follows:
"Set $\hat{m} = m_i$ if
$$\frac{p_k\,f_{\mathbf{X}}(\mathbf{x} \mid m_k)}{f_{\mathbf{X}}(\mathbf{x})} \text{ is maximum for } k = i\text{"}$$
where $p_k$ is the prior probability of transmitting symbol $m_k$, $f_{\mathbf{X}}(\mathbf{x} \mid m_k)$ is the conditional probability density function of the random observation vector $\mathbf{X}$ given the transmission of symbol $m_k$, and $f_{\mathbf{X}}(\mathbf{x})$ is the unconditional probability density function of $\mathbf{X}$.
3. We now note the following points:
∙ The denominator term $f_{\mathbf{X}}(\mathbf{x})$ is independent of the transmitted symbol;
∙ The prior probability $p_k = 1/M$ when all the source symbols are transmitted with equal probability;
∙ The conditional probability density function $f_{\mathbf{X}}(\mathbf{x} \mid m_k)$ bears a one-to-one relationship to the log-likelihood function $l(m_k)$.
Accordingly, we may restate the decision rule in terms of $l(m_k)$ as follows:
"Set $\hat{m} = m_i$ if $l(m_k)$ is maximum for $k = i$."
This decision rule is known as the maximum likelihood rule. The system used for its implementation is correspondingly referred to as the maximum likelihood decoder.
4. It is useful to have a graphical interpretation of the maximum likelihood decision rule. Let Z denote the N-dimensional space of all possible observation vectors $\mathbf{x}$. We refer to this space as the observation space. Because the decision rule must assign $\hat{m} = m_i$ for some $i = 1, \ldots, M$, the total observation space Z is correspondingly partitioned into M decision regions, denoted by $Z_1, Z_2, \ldots, Z_M$. We may restate the decision rule as:
"Observation vector $\mathbf{x}$ lies in region $Z_i$ if $l(m_k)$ is maximum for $k = i$."
5. The maximum likelihood decision rule, or its geometric counterpart just described, assumes only that the channel noise is additive. We next specialize this rule for the case when the noise is both white and Gaussian.
From the log-likelihood function
$$l(m_k) = -\frac{1}{N_0}\sum_{j=1}^{N}(x_j - s_{kj})^2$$
we note that $l(m_k)$ attains its maximum value when the summation term $\sum_{j=1}^{N}(x_j - s_{kj})^2$ is minimized by the choice $k = i$. We may therefore formulate the maximum likelihood decision rule for an AWGN channel as:
"Observation vector $\mathbf{x}$ lies in region $Z_i$ if $\sum_{j=1}^{N}(x_j - s_{kj})^2$ is minimum for $k = i$."
Note that we have used "minimum" as the optimizing condition because of the minus sign in the log-likelihood function. We have
$$\sum_{j=1}^{N}(x_j - s_{kj})^2 = \|\mathbf{x} - \mathbf{s}_k\|^2$$
where $\|\mathbf{x} - \mathbf{s}_k\|$ is the Euclidean distance between the observation vector at the receiver input and the transmitted signal vector $\mathbf{s}_k$.
Accordingly, we may restate the decision rule as:
"Observation vector $\mathbf{x}$ lies in region $Z_i$ if the Euclidean distance $\|\mathbf{x} - \mathbf{s}_k\|$ is minimum for $k = i$."
6. The above equation states that the maximum likelihood decision rule is simply to choose the message point closest to the received signal point.
Consider the expansion
$$\|\mathbf{x} - \mathbf{s}_k\|^2 = \sum_{j=1}^{N}(x_j - s_{kj})^2 = \sum_{j=1}^{N}x_j^2 - 2\sum_{j=1}^{N}x_j\,s_{kj} + \sum_{j=1}^{N}s_{kj}^2$$
The first summation term of this expansion is independent of the index k pertaining to the transmitted signal vector $\mathbf{s}_k$ and may therefore be ignored.
The second summation term is the inner product of the observation vector $\mathbf{x}$ and the transmitted signal vector $\mathbf{s}_k$.
The third summation term is the transmitted signal energy
$$E_k = \sum_{j=1}^{N}s_{kj}^2$$
The decision metric therefore becomes
$$-2\sum_{j=1}^{N}x_j\,s_{kj} + E_k$$
If the RHS of the above equation is divided by $-2$, we get
$$\sum_{j=1}^{N}x_j\,s_{kj} - \frac{1}{2}E_k$$
We may thus reformulate the maximum likelihood decision rule as:
"Observation vector $\mathbf{x}$ lies in region $Z_i$ if
$$\left(\sum_{j=1}^{N}x_j\,s_{kj} - \frac{1}{2}E_k\right)$$
is maximum for $k = i$, where $E_k$ is the transmitted signal energy."


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The example in Figure 2.9 illustrates this statement for M = 4 signals and N = 2 dimensions,
assuming that the signals are transmitted with equal energy and equal probability.
Figure 2.9 Illustrating the partitioning of the observation space into decision regions for the case when N = 2 and M = 4; it is assumed that the M transmitted symbols are equally likely.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
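The two equivalent forms of the maximum likelihood decision rule — minimum Euclidean distance, and maximum correlation corrected by half the signal energy — can be sketched as follows (illustrative Python, with function names of our choosing):

```python
import numpy as np

def ml_decide_distance(x, signal_vectors):
    """Choose the message point closest to the observation x (minimum Euclidean distance)."""
    d2 = [np.sum((x - s) ** 2) for s in signal_vectors]
    return int(np.argmin(d2))

def ml_decide_correlation(x, signal_vectors):
    """Equivalent rule: maximize the inner product x . s_k minus half the energy E_k."""
    metric = [np.dot(x, s) - 0.5 * np.sum(s ** 2) for s in signal_vectors]
    return int(np.argmax(metric))
```

Both functions give the same decision for every observation vector, since the two metrics differ only by the term $\sum_j x_j^2$, which is common to all k.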

Correlation Receiver
The optimum receiver for an AWGN channel, for the case when the transmitted signals are equally likely, is called a correlation receiver; it consists of two subsystems, which are detailed in Figure 2.10:
1. Detector (Figure 2.10a), which consists of N correlators supplied with a set of orthonormal basis functions that are generated locally; this bank of correlators operates on the received signal $x(t),\ 0 \le t \le T$, to produce the observation vector $\mathbf{x}$.
2. Maximum-likelihood decoder (Figure 2.10b), which operates on the observation vector $\mathbf{x}$ to produce an estimate $\hat{m}$ of the transmitted symbol $m_i,\ i = 1, \ldots, M$, in such a way that the average probability of symbol error is minimized.
Figure 2.10 (a) Detector or demodulator. (b) Signal transmission decoder.
In accordance with the maximum likelihood decision rule, the decoder multiplies the N elements of the observation vector $\mathbf{x}$ by the corresponding N elements of each of the M signal vectors $\mathbf{s}_1, \ldots, \mathbf{s}_M$. Then, the resulting products are successively summed in accumulators to form the corresponding set of inner products $\{\mathbf{x}^T\mathbf{s}_k \mid k = 1, \ldots, M\}$. The inner products are then corrected for the transmitted signal energies. Finally, the largest of the resulting set of numbers is selected, and the corresponding decision on the transmitted message is made.
Matched Filter Receiver
The detector described in the previous section involves a set of correlators. Alternatively, we may use a different but equivalent structure in place of the correlators.

Consider a linear time-invariant filter with impulse response $h_j(t)$. With the received signal $x(t)$ operating as input, the resulting filter output $y_j(t)$ is defined by the convolution integral
$$y_j(t) = \int_{-\infty}^{\infty} x(\tau)\,h_j(t - \tau)\,d\tau$$
We evaluate this output at the end of a transmitted symbol interval, namely $t = T$:
$$y_j(T) = \int_{-\infty}^{\infty} x(\tau)\,h_j(T - \tau)\,d\tau$$
Consider next a detector based on a bank of correlators. The output of the jth correlator is defined by
$$x_j = \int_0^T x(\tau)\,\phi_j(\tau)\,d\tau$$
For $y_j(T) = x_j$, we choose
$$h_j(T - \tau) = \phi_j(\tau)$$
Equivalently, we may express the condition imposed on the desired impulse response of the filter as
$$h_j(t) = \phi_j(T - t), \quad 0 \le t \le T$$
We may state the above condition as follows:
"Given a pulse signal $\phi(t)$ occupying the interval $0 \le t \le T$, a linear time-invariant filter is said to be matched to the signal $\phi(t)$ if its impulse response satisfies the condition $h(t) = \phi(T - t)$."
A time-invariant filter defined in this way is called a matched filter. Correspondingly, an optimum receiver using matched filters in place of correlators is called a matched-filter receiver. Such a receiver is depicted in Figure 2.11, shown below.
Figure 2.11 Detector part of matched filter receiver.
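The equivalence between the correlator output and the matched filter output sampled at t = T can be checked numerically. The sketch below uses an arbitrary example pulse and discretizes the integrals as Riemann sums:

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 1.0, dt)             # symbol interval [0, T), with T = 1
phi = np.where(t < 0.5, 1.0, -1.0)      # example basis pulse (assumed shape)
x = phi + 0.1 * np.sin(20 * np.pi * t)  # received signal with a disturbance added

# Correlator output: integral of x(t) * phi(t) over [0, T]
corr_out = np.sum(x * phi) * dt

# Matched filter h(t) = phi(T - t): convolve with x, then sample at t = T
h = phi[::-1]
y = np.convolve(x, h) * dt
mf_out = y[len(t) - 1]                  # sample index corresponding to t = T
```

The two outputs agree to floating-point precision, confirming that the matched filter, sampled at the end of the symbol interval, computes exactly the correlation of the received signal with the basis pulse.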
--------------------------------------------------------------------------------------------------
Signal Design for Bandlimited Channels
First, the design will be done under the condition that there is no channel distortion. For a channel with frequency response $C(f)$, the condition for distortion-free transmission is that $C(f)$ must have a constant magnitude and a linear phase over the bandwidth of the transmitted signal:
$$C(f) = \begin{cases} C_0\,e^{-j2\pi f t_0}, & |f| \le W \\ 0, & |f| > W \end{cases}$$
where W is the available channel bandwidth, $t_0$ represents a finite delay, which we set to zero for convenience, and $C_0$ is a constant gain factor, which we set to unity for convenience.
Thus, under the condition that the channel is distortion free and the bandwidth of the transmitted pulse is limited to W, the matched filter at the receiver has a frequency response matched to the transmitted pulse, and its output at the periodic sampling times $t = mT$ has the form
$$y(mT) = a_m\,x(0) + \sum_{n \ne m} a_n\,x(mT - nT) + v(mT)$$
where $\{a_n\}$ are the transmitted symbols, $x(t)$ is the overall pulse at the matched filter output, and $v(mT)$ is the output response of the matched filter to the input AWGN process $w(t)$.
The middle term on the RHS of the above equation represents ISI (intersymbol interference). The amount of ISI and noise present in the received signal can be viewed on an oscilloscope: we display the received signal on the vertical input, with the horizontal sweep rate set at $1/T$. The resulting oscilloscope display is called an eye pattern because of its resemblance to the human eye.

Examples of two eye patterns, one for binary PAM and the other for quaternary (M = 4) PAM, are illustrated in Figure 2.12.

Figure 2.12 Eye patterns. (a) Examples of eye patterns for binary and quaternary PAM and (b) the effect of ISI on the eye pattern.
The effect of ISI is to cause the eye to close, thereby reducing the margin available before additive noise causes errors. Note that ISI distorts the position of the zero crossings and causes a reduction in the eye opening. As a consequence, the system is more sensitive to synchronization errors and exhibits a smaller margin against additive noise.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Example 2.3
Consider a binary PAM system that transmits data at a rate of $R = 1/T$ bits/sec through an ideal channel of bandwidth W. The sampled output from the matched filter at the receiver is of the form
$$y_m = a_m + \text{(ISI terms)} + v_m$$
where the information symbols are $a_m = \pm 1$ with equal probability. Determine the peak value of the ISI and the noise margin.
Solution: Comparing the sampled output with the general matched filter output, the desired term is the one multiplying $a_m$; the ISI is caused by the terms multiplying the neighboring symbols. The peak value of the ISI occurs when each interfering symbol takes the sign that makes its contribution add destructively; summing the magnitudes of the interfering coefficients, the ISI term takes a peak value of 0.5. Since $a_m = \pm 1$, this ISI causes a 50% reduction in the eye opening at the sampling times $t = mT$. Hence, the noise margin is reduced by 50%, to a value of 0.5.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Design of Bandlimited Signals for Zero ISI - The Nyquist Criterion
Consider a digital communication system that transmits through an ideal bandlimited channel whose bandwidth is less than or equal to W Hz.
The Fourier transform of the overall pulse $x(t)$ at the output of the receiving filter is
$$X(f) = G_T(f)\,C(f)\,G_R(f)$$
where $G_T(f)$ and $G_R(f)$ denote the frequency responses of the transmitter and receiver filters and $C(f)$ denotes the frequency response of the channel, with $C(f) = 0$ for $|f| > W$. For convenience, we set $C(f) = 1$ for $|f| \le W$.
To remove the effect of ISI, it is necessary and sufficient that the pulse at the output of the receiving filter satisfy
$$x(nT) = \begin{cases} 1, & n = 0 \\ 0, & n \ne 0 \end{cases}$$
where we have normalized $x(0) = 1$. The overall communication system has to be designed so that this condition holds; it is known as the Nyquist pulse-shaping criterion, or the Nyquist condition for zero ISI.
Nyquist Condition for Zero ISI
A necessary and sufficient condition for x(t) to satisfy
$$x(nT) = \begin{cases} 1, & n = 0 \\ 0, & n \ne 0 \end{cases}$$
is that its Fourier transform X(f) satisfy
$$B(f) \triangleq \sum_{m=-\infty}^{\infty} X\!\left(f + \frac{m}{T}\right) = T$$
Suppose the channel has a bandwidth of W Hz. Then $C(f) = 0$ for $|f| > W$ and, consequently, $X(f) = 0$ for $|f| > W$. We have three cases:

1. $T < 1/2W$, or $1/T > 2W$. In this case, $B(f) = \sum_m X(f + m/T)$ consists of nonoverlapping replicas of $X(f)$, separated by $1/T$, as shown in Figure 2.13, so there is no choice for $X(f)$ that ensures $B(f) = T$ in this case.

Figure 2.13 Plot of $B(f)$ for the case $T < 1/2W$.

2. $T = 1/2W$, or $1/T = 2W$ (the Nyquist rate). The replicas of $X(f)$, separated by $1/T$, are about to overlap, as shown in Figure 2.14. It is clear that there exists only one $X(f)$ that results in $B(f) = T$, namely
$$X(f) = \begin{cases} T, & |f| \le W \\ 0, & \text{otherwise} \end{cases}$$
which results in the pulse
$$x(t) = \operatorname{sinc}\!\left(\frac{t}{T}\right), \qquad \operatorname{sinc}(u) \triangleq \frac{\sin(\pi u)}{\pi u}$$
This means that the smallest value of T for which transmission with zero ISI is possible is $T = 1/2W$; for this value, x(t) has to be a sinc function.

Figure 2.14 Plot of $B(f)$ for the case $T = 1/2W$.

3. $T > 1/2W$. In this case, $B(f)$ consists of overlapping replicas of $X(f)$ separated by $1/T$, as shown in Figure 2.15. There exists an infinite number of choices for $X(f)$ such that $B(f) = T$.

Figure 2.15 Plot of $B(f)$ for the case $T > 1/2W$.


For the case $T > 1/2W$, a particular pulse spectrum that has desirable spectral properties and has been widely used in practice is the raised cosine spectrum.
The raised cosine frequency characteristic is given by
$$X_{rc}(f) = \begin{cases} T, & 0 \le |f| \le \dfrac{1-\beta}{2T} \\[2mm] \dfrac{T}{2}\left[1 + \cos\!\left(\dfrac{\pi T}{\beta}\left(|f| - \dfrac{1-\beta}{2T}\right)\right)\right], & \dfrac{1-\beta}{2T} \le |f| \le \dfrac{1+\beta}{2T} \\[2mm] 0, & |f| > \dfrac{1+\beta}{2T} \end{cases}$$
where $\beta$ is called the roll-off factor, which takes values in the range $0 \le \beta \le 1$.

The bandwidth occupied by the signal beyond the Nyquist frequency $1/2T$ is called the excess bandwidth and is usually expressed as a percentage of the Nyquist frequency. For example, when $\beta = 1/2$, the excess bandwidth is 50%; when $\beta = 1$, the excess bandwidth is 100%.
The pulse having the raised cosine spectrum is
$$x(t) = \operatorname{sinc}\!\left(\frac{t}{T}\right)\frac{\cos(\pi\beta t/T)}{1 - 4\beta^2 t^2/T^2}$$
Note that x(t) is normalized so that x(0) = 1.
Figure 2.16 illustrates the raised cosine spectral characteristics and the corresponding pulses for $\beta = 0, 1/2, 1$.

Figure 2.16 Pulses having a raised cosine spectrum.

We note that for $\beta = 0$, the pulse reduces to $\operatorname{sinc}(t/T)$ and the symbol rate is $1/T = 2W$. When $\beta = 1$, the symbol rate is $1/T = W$. In general, the tails of x(t) decay as $1/t^3$ for $\beta > 0$.
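The raised cosine pulse and its zero-ISI property can be checked numerically. In the sketch below, `np.sinc` is the normalized sinc, $\sin(\pi u)/(\pi u)$, matching the convention above; the removable singularity at $t = \pm T/(2\beta)$ is handled by its limiting value:

```python
import numpy as np

def raised_cosine(t, T, beta):
    """Raised-cosine pulse x(t) = sinc(t/T) cos(pi*beta*t/T) / (1 - 4 beta^2 t^2 / T^2)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    denom = 1.0 - (2.0 * beta * t / T) ** 2
    out = np.empty_like(t)
    sing = np.isclose(denom, 0.0)                # points t = +/- T/(2*beta)
    out[~sing] = (np.sinc(t[~sing] / T)
                  * np.cos(np.pi * beta * t[~sing] / T) / denom[~sing])
    if beta > 0:
        out[sing] = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))  # limiting value
    return out
```

Evaluating the pulse at the sampling instants gives x(0) = 1 and x(nT) = 0 for all nonzero integers n, which is precisely the Nyquist zero-ISI condition.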

Design of Bandlimited Signals with Controlled ISI - Partial-Response Signals
To realize practical transmitting and receiving filters, it is necessary to reduce the symbol rate below the Nyquist rate of 2W symbols/sec. On the other hand, suppose we choose to relax the condition of zero ISI and thereby achieve a symbol transmission rate of 2W symbols/sec. By allowing a controlled amount of ISI, we can achieve this symbol rate.

The condition of zero ISI is $x(nT) = 0$ for $n \ne 0$. Suppose, however, that we design the bandlimited signal to have controlled ISI at one time instant; this means that we allow one additional nonzero value in the samples $\{x(nT)\}$. The ISI that we introduce is deterministic, or "controlled"; hence, it can be taken into account at the receiver.
One special case that leads to (approximately) physically realizable transmitting and receiving filters is specified by the samples
$$x(nT) = \begin{cases} 1, & n = 0, 1 \\ 0, & \text{otherwise} \end{cases}$$
Prerequisite equations
Consider

B(f) = \sum_{n=-\infty}^{\infty} X\left(f + \frac{n}{T}\right) \quad \text{(a)}

Since B(f) is periodic in f with period 1/T, it can be expanded in a Fourier series as

B(f) = \sum_{n=-\infty}^{\infty} b_n e^{j2\pi n f T} \quad \text{(b)}

where

b_n = T \int_{-1/2T}^{1/2T} B(f)\, e^{-j2\pi n f T} \, df \quad \text{(c)}

But

x(nT) = \int_{-1/2T}^{1/2T} B(f)\, e^{j2\pi n f T} \, df \quad \text{(d)}

Comparing (c) and (d), we obtain

b_n = T\, x(-nT)

Now, with the duobinary samples x(nT) = 1 for n = 0, 1 and zero otherwise,

b_n = \begin{cases} T, & n = 0, -1 \\ 0, & \text{otherwise} \end{cases}

If substituted in equation (b), it yields

\sum_{n=-\infty}^{\infty} X\left(f + \frac{n}{T}\right) = T + T e^{-j2\pi f T} = 2T e^{-j\pi f T} \cos(\pi f T)

It is impossible to satisfy this equation for T < 1/(2W). However, for T = 1/(2W), we obtain

X(f) = \begin{cases} \frac{1}{2W}\left(1 + e^{-j\pi f/W}\right) = \frac{1}{W}\, e^{-j\pi f/2W} \cos\left(\frac{\pi f}{2W}\right), & |f| \le W \\ 0, & \text{otherwise} \end{cases}

Therefore, x(t) is given by

x(t) = \frac{\sin(\pi t/T)}{\pi t/T} + \frac{\sin(\pi (t-T)/T)}{\pi (t-T)/T}, \quad T = \frac{1}{2W}

This pulse is called a duobinary signal pulse.

Figure 2.17 Time-domain and frequency-domain characteristics of a duobinary signal.

We note that the spectrum decays to zero smoothly, which means that physically realizable filters can be designed to approximate this spectrum very closely. Thus, a symbol rate of 2W symbols/sec is achieved.
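As a quick numerical check (a sketch, not from the text), writing the duobinary pulse as the sum of two shifted sinc pulses reproduces the controlled-ISI samples x(0) = x(T) = 1 and x(nT) = 0 for all other integers n:

```python
import math

def sinc(u):
    """Normalized sinc: sin(pi*u)/(pi*u), with sinc(0) = 1."""
    if abs(u) < 1e-12:
        return 1.0
    return math.sin(math.pi * u) / (math.pi * u)

def duobinary(t, T=1.0):
    """Duobinary pulse x(t) = sinc(t/T) + sinc((t - T)/T)."""
    return sinc(t / T) + sinc((t - T) / T)

# Controlled ISI: exactly two nonzero samples, at n = 0 and n = 1.
samples = [duobinary(n * 1.0) for n in range(-3, 5)]
```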
***************************************************************************
Another special case that leads to (approximately) physically realizable transmitting and receiving filters is specified by the samples

x(nT) = \begin{cases} 1, & n = -1 \\ -1, & n = 1 \\ 0, & \text{otherwise} \end{cases}

The corresponding pulse is given as

x(t) = \frac{\sin(\pi (t+T)/T)}{\pi (t+T)/T} - \frac{\sin(\pi (t-T)/T)}{\pi (t-T)/T}

and its spectrum is

X(f) = \begin{cases} \frac{1}{2W}\left(e^{j\pi f/W} - e^{-j\pi f/W}\right) = \frac{j}{W} \sin\left(\frac{\pi f}{W}\right), & |f| \le W \\ 0, & \text{otherwise} \end{cases}

This pulse and its magnitude spectrum are illustrated in Figure 2.18. It is called a modified duobinary signal pulse.

Figure 2.18 Time-domain and frequency-domain characteristics of a modified duobinary signal.

It is interesting to note that the spectrum of this signal has a zero at f = 0, making it suitable for transmission over a channel that does not pass DC.
**************************************************************************
Note: (a) We can obtain other interesting and physically realizable filter characteristics by selecting different values for the samples {x(nT)} and by selecting more than two nonzero samples.
(b) As we select more nonzero samples, the problem of unraveling the controlled ISI
becomes more cumbersome and impractical.
**************************************************************************
* In general, the class of bandlimited signal pulses that have the form

x(t) = \sum_{n=-\infty}^{\infty} x(nT)\, \frac{\sin(\pi (t - nT)/T)}{\pi (t - nT)/T}

have the corresponding spectra

X(f) = \begin{cases} T \sum_{n=-\infty}^{\infty} x(nT)\, e^{-j2\pi f n T}, & |f| \le \frac{1}{2T} \\ 0, & |f| > \frac{1}{2T} \end{cases}

These signals are called partial response signals when controlled ISI is purposely introduced by selecting two or more nonzero samples from the set {x(nT)}. The resulting signal pulses allow us to transmit information symbols at the Nyquist rate of 2W symbols/sec.

Probability of Error for detection of Digital PAM

In this section, we evaluate the performance of the receiver for demodulation and detection of an M-ary PAM signal in the presence of additive white Gaussian noise at its input. First, we consider the case in which the transmitter and receiver filters G_T(f) and G_R(f) are designed for zero ISI. Then, we consider the case in which x(t) is either a duobinary signal or a modified duobinary signal.

Probability of Error for detection of Digital PAM with zero ISI

In the absence of ISI, the received signal sample at the output of the receiving matched filter has the form

y_m = x_0 a_m + v_m

Considering the equation

x_0 = \int_{-W}^{W} G_T(f)\, G_R(f) \, df

with G_R(f) = G_T^{*}(f),

x_0 = \int_{-W}^{W} |G_T(f)|^2 \, df

where v_m is the additive Gaussian noise, which has zero mean and variance \sigma_v^2 = \frac{N_0}{2} x_0.

In general, a_m takes one of M possible equally spaced amplitude values with equal probability. In the absence of ISI, the problem of evaluating the probability of error for digital PAM in a bandlimited, additive white Gaussian noise channel is identical to the evaluation of the error probability for M-ary PAM.
It is given by

P_M = \frac{2(M-1)}{M}\, Q\left(\sqrt{\frac{6 E_{av}}{(M^2 - 1) N_0}}\right)
But E_{av} = (\log_2 M)\, E_{avb}, where E_{av} is the average energy/symbol and E_{avb} is the average energy/bit. Hence,

P_M = \frac{2(M-1)}{M}\, Q\left(\sqrt{\frac{6 (\log_2 M)\, E_{avb}}{(M^2 - 1) N_0}}\right)
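The error-probability expression is easy to evaluate numerically. The sketch below (an illustration, not part of the text) uses the standard identity Q(x) = erfc(x/√2)/2 and shows that, at a fixed Eavb/N0, the error probability grows with M, while at fixed M it falls as Eavb/N0 increases.

```python
import math

def Q(x):
    """Gaussian tail probability, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pam_symbol_error(M, ebn0_db):
    """P_M = 2(M-1)/M * Q( sqrt( 6*log2(M)*Eavb / ((M^2-1)*N0) ) )."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)   # Eavb/N0 as a linear ratio
    arg = math.sqrt(6.0 * math.log2(M) * ebn0 / (M * M - 1))
    return 2.0 * (M - 1) / M * Q(arg)

p2_10dB = pam_symbol_error(2, 10.0)
p4_10dB = pam_symbol_error(4, 10.0)
p2_12dB = pam_symbol_error(2, 12.0)
```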

Symbol-by-Symbol Detection of Data with controlled ISI

In this section, we describe a symbol-by-symbol method for detecting the information symbols at the demodulator for PAM when the received signal contains controlled ISI. In particular, we consider the detection of the duobinary and the modified duobinary partial response signals. In both cases, we assume that the desired spectral characteristic X(f) for the partial response signal is split evenly between the transmitting and receiving filters, i.e., |G_T(f)| = |G_R(f)| = |X(f)|^{1/2}.
For the duobinary signal pulse, x(nT) = 1 for n = 0, 1 and zero otherwise. Hence, the samples at the output of the receiving filter have the form

y_m = b_m + v_m = a_m + a_{m-1} + v_m

where {a_m} is the transmitted sequence of amplitudes and {v_m} is the sequence of additive Gaussian noise samples.
Consider the binary case, where a_m = ±1 with equal probability. Then b_m = a_m + a_{m-1} takes on one of three possible values, namely -2, 0, and 2, with corresponding probabilities 1/4, 1/2, and 1/4.
If a_{m-1} is the detected symbol from the signaling interval beginning at (m-1)T, its effect on b_m, the received signal in the mth signaling interval, can be eliminated by subtraction, thus allowing a_m to be detected. This process can be repeated sequentially for every received symbol.
The major problem with this procedure is that errors arising from the additive noise tend to propagate. For example, if the detector makes an error in detecting a_{m-1}, its effect on b_m is not eliminated; in fact, it is reinforced by the incorrect subtraction. Hence, the detection of a_m is also likely to be in error.
Error propagation can be avoided by precoding the data at the transmitter instead of
eliminating the controlled ISI by subtraction at the receiver. The precoding is performed on
the binary data sequence prior to modulation.
From the data sequence {d_m} of ones and zeros that is to be transmitted, a new sequence {p_m}, called the precoded sequence, is generated. For the duobinary signal, the precoded sequence is defined as

p_m = d_m \ominus p_{m-1}, \quad m = 1, 2, \ldots

where the symbol \ominus denotes modulo-2 subtraction. Then, we set a_m = -1 if p_m = 0, and a_m = 1 if p_m = 1, i.e.,

a_m = 2p_m - 1

The noise-free samples at the output of the receiving filter are given as

b_m = a_m + a_{m-1} = (2p_m - 1) + (2p_{m-1} - 1) = 2(p_m + p_{m-1} - 1)

Consequently,

p_m + p_{m-1} = \frac{b_m}{2} + 1

Since d_m = p_m \oplus p_{m-1}, it follows that the data sequence {d_m} is obtained from {b_m} by using the relation

d_m = \left(\frac{b_m}{2} + 1\right) \bmod 2

If b_m = ±2, then d_m = 0, and if b_m = 0, then d_m = 1.
The received level b_m for the mth transmission is directly related to d_m, the data at the same transmission time. Therefore, an error in reception of b_m affects only the corresponding data d_m, and no error propagation occurs.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Example 2.4
For the binary data sequence given as
1 1 1 0 1 0 0 1 0 0 0 1 1 0 1,
Determine the precoded sequence {p_m}, the transmitted sequence {a_m}, the received sequence {b_m}, and the decoded sequence {d_m}.
Solution
By using the equations

(1) p_m = d_m \ominus p_{m-1} (with initial value p_0 = 0),
(2) a_m = -1 if p_m = 0 and a_m = 1 if p_m = 1, i.e., a_m = 2p_m - 1,
(3) b_m = a_m + a_{m-1},
(4) d_m = (b_m/2 + 1) \bmod 2,

we obtain

Data d_m:         1  1  1  0  1  0  0  1  0  0  0  1  1  0  1
Precoded p_m:     1  0  1  1  0  0  0  1  1  1  1  0  1  1  0
Transmitted a_m:  1 -1  1  1 -1 -1 -1  1  1  1  1 -1  1  1 -1
Received b_m:     0  0  0  2  0 -2 -2  0  2  2  2  0  0  2  0
Decoded d_m:      1  1  1  0  1  0  0  1  0  0  0  1  1  0  1

The decoded sequence agrees with the transmitted data sequence.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
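The precoding, transmission, and decoding rules of Example 2.4 are easy to mechanize. The sketch below (assuming, as an initial condition, a precoder value p_0 = 0) runs the noise-free duobinary chain on the example's data sequence and confirms that the decoded sequence matches the input, with no error propagation by construction.

```python
def duobinary_encode_decode(data, p0=0):
    """Precode, transmit, and decode a binary sequence with duobinary signaling.

    Precoding:  p_m = (d_m - p_{m-1}) mod 2
    Mapping:    a_m = 2*p_m - 1
    Channel:    b_m = a_m + a_{m-1}   (noise-free duobinary samples)
    Decoding:   d_m = (b_m/2 + 1) mod 2
    """
    p = [p0]
    for d in data:
        p.append((d - p[-1]) % 2)
    a = [2 * pm - 1 for pm in p]
    b = [a[m] + a[m - 1] for m in range(1, len(a))]
    decoded = [int(bm // 2 + 1) % 2 for bm in b]
    return p[1:], a[1:], b, decoded

data = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1]   # sequence of Example 2.4
p, a, b, decoded = duobinary_encode_decode(data)
```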
In the presence of additive noise, the sampled outputs from the receiving filter are given by y_m = b_m + v_m. In this case, y_m is compared with the two thresholds set at +1 and -1. The data sequence {d_m} is obtained according to the detection rule

\hat{d}_m = \begin{cases} 1, & -1 < y_m < 1 \\ 0, & |y_m| \ge 1 \end{cases}
The extension from binary PAM to multilevel PAM signaling using the duobinary pulses is straightforward. In this case, the M-level amplitude sequence {a_m} results in a (noise-free) sequence

b_m = a_m + a_{m-1}, \quad m = 1, 2, \ldots

which has 2M - 1 possible equally spaced levels. The amplitude levels are determined from the relation

a_m = 2p_m - (M - 1)

where {p_m} is the precoded sequence that is obtained from an M-level data sequence {d_m} according to the relation

p_m = d_m \ominus p_{m-1} \pmod{M}

where the possible values of the sequence {d_m} are 0, 1, 2, \ldots, M - 1.

In the absence of noise, the samples at the output of the receiving filter may be expressed as

b_m = a_m + a_{m-1} = 2\left[p_m + p_{m-1} - (M - 1)\right]

Hence,

p_m + p_{m-1} = \frac{b_m}{2} + (M - 1)

Since d_m = p_m + p_{m-1} \pmod{M}, it follows that

d_m = \left(\frac{b_m}{2} + M - 1\right) \bmod M

Here again, we see that error propagation has been prevented by using precoding.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Example 2.5
Consider the four-level data sequence
0 0 1 3 1 2 0 3 3 2 0 1 0,
which was obtained by mapping two bits into four-level symbols, i.e., 00 ➝0, 01➝1 , 10➝2,
and 11➝3. Determine the precoded sequence {p_m}, the transmitted sequence {a_m}, the received sequence {b_m}, and the decoded sequence {d_m}.
Solution: By using the equations

(1) p_m = d_m \ominus p_{m-1} \pmod 4 (with initial value p_0 = 0),
(2) a_m = 2p_m - 3, i.e., a_m = -3, -1, 1, 3 for p_m = 0, 1, 2, 3,
(3) b_m = a_m + a_{m-1},
(4) d_m = (b_m/2 + 3) \bmod 4,

we obtain

Data d_m:          0  0  1  3  1  2  0  3  3  2  0  1  0
Precoded p_m:      0  0  1  2  3  3  1  2  1  1  3  2  2
Transmitted a_m:  -3 -3 -1  1  3  3 -1  1 -1 -1  3  1  1
Received b_m:     -6 -6 -4  0  4  6  2  0  0 -2  2  4  2
Decoded d_m:       0  0  1  3  1  2  0  3  3  2  0  1  0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
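The M-level rules of Example 2.5 can be checked with the same kind of sketch (again assuming an initial precoder value p_0 = 0; an illustration, not part of the text):

```python
def duobinary_M(data, M=4, p0=0):
    """M-level duobinary precoding/decoding (noise-free).

    p_m = (d_m - p_{m-1}) mod M,   a_m = 2*p_m - (M-1),
    b_m = a_m + a_{m-1},           d_m = (b_m/2 + M - 1) mod M
    """
    p = [p0]
    for d in data:
        p.append((d - p[-1]) % M)
    a = [2 * pm - (M - 1) for pm in p]
    b = [a[m] + a[m - 1] for m in range(1, len(a))]
    decoded = [int(bm // 2 + M - 1) % M for bm in b]
    return p[1:], a[1:], b, decoded

data4 = [0, 0, 1, 3, 1, 2, 0, 3, 3, 2, 0, 1, 0]   # sequence of Example 2.5
p4_, a4_, b4_, decoded4 = duobinary_M(data4, M=4)
```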
In the presence of noise, the received signal-plus-noise is quantized to the nearest signal level, and the preceding rule is used on the quantized values to recover the data sequence.
In the case of the modified duobinary pulse, the controlled ISI is specified by the values

x(nT) = \begin{cases} 1, & n = -1 \\ -1, & n = 1 \\ 0, & \text{otherwise} \end{cases}

The noise-free sampled output from the receiving filter is given as

b_m = a_m - a_{m-2}

where the M-level sequence {a_m} is obtained by mapping a precoded sequence according to the relation

a_m = 2p_m - (M - 1)

and

p_m = d_m \oplus p_{m-2} \pmod{M}

From these relations, it is easy to show that the detection rule for recovering the data sequence {d_m} from {b_m} in the absence of noise is

d_m = \frac{b_m}{2} \bmod M
The precoding of the data at the transmitter makes it possible to detect the received data on a
symbol-by-symbol basis without having to look back at previously detected symbols. Thus,
error propagation is avoided.
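The modified duobinary relations can be checked the same way. The sketch below (an illustration; initial precoder values are assumed zero) verifies that symbol-by-symbol decoding recovers the data for both binary and four-level sequences:

```python
def mod_duobinary(data, M=2, p0=0, p1=0):
    """Modified duobinary precoding/decoding (noise-free).

    p_m = (d_m + p_{m-2}) mod M,   a_m = 2*p_m - (M-1),
    b_m = a_m - a_{m-2},           d_m = (b_m/2) mod M
    """
    p = [p0, p1]
    for d in data:
        p.append((d + p[-2]) % M)
    a = [2 * pm - (M - 1) for pm in p]
    b = [a[m] - a[m - 2] for m in range(2, len(a))]
    return [int(bm // 2) % M for bm in b]

bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
decoded_bits = mod_duobinary(bits, M=2)
```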

Channel Equalization
Our objective was to design the filters for zero ISI at the sampling instants. This design
methodology is appropriate when the channel is precisely known and its characteristics do
not change with time.
In practice, we often encounter channels whose frequency-response characteristics are either
unknown or change with time. For example, in data transmission over the dial-up telephone
network, the communication channel will be different every time we dial a number because
the channel route will be different. Once a connection is made, the channel will be time
invariant for a relatively long period of time. This is an example of a channel whose
characteristics are unknown a priori.
Examples of time-varying channels are radio channels, such as ionospheric propagation
channels. These channels are characterized by time-varying frequency response
characteristics. When the channel characteristics are unknown or time varying, the optimization of the transmitting and receiving filters is not possible.
We may design the transmitting filter to have a square-root raised cosine frequency response,

G_T(f) = \begin{cases} \sqrt{X_{rc}(f)}\, e^{-j2\pi f t_0}, & |f| \le W \\ 0, & |f| > W \end{cases}

and the receiving filter, with frequency response G_R(f), to be matched to G_T(f). Therefore, |G_T(f)||G_R(f)| = X_{rc}(f).
Then, due to channel distortion, the output of the receiving filter is

y(t) = \sum_{n} a_n x(t - nT) + v(t)

where x(t) = g_T(t) \star c(t) \star g_R(t). The filter output may be sampled periodically to produce the sequence

y_m = x_0 a_m + \sum_{n \ne m} a_n x_{m-n} + v_m

where x_n = x(nT). The middle term on the right-hand side of the equation represents the ISI.
In any practical system, it is reasonable to assume that the ISI affects a finite number of symbols. Hence, we may assume that x_n = 0 for n < -L_1 and n > L_2, where L_1 and L_2 are finite, positive integers. The ISI observed at the output of the receiving filter may be viewed as being generated by passing the data sequence {a_m} through an FIR filter with coefficients {x_n, -L_1 \le n \le L_2}, as shown in Figure 2.19. This filter is called the equivalent discrete-time channel filter.
Since its input is the discrete information sequence (binary or M-ary), the output of the discrete-time channel filter may be characterized as the output of a finite-state machine with M^{L_1 + L_2} states, corrupted by additive Gaussian noise. The noise-free output of the filter is described by a trellis having M^{L_1 + L_2} states.

Figure 2.19 Equivalent discrete-time channel filter

Linear Equalizers
For channels whose frequency-response characteristics are unknown but time invariant, we may employ a linear filter with adjustable parameters, which are updated on a periodic basis to compensate for the channel distortion. Such a filter, having parameters that are adjusted periodically, is called an adaptive equalizer.
First, we consider the design characteristics for a linear equalizer from a frequency domain
viewpoint. Figure 2.20 shows a block diagram of a system that employs a linear filter as a
channel equalizer.

Figure 2.20 Block diagram of a system with equalizer

The demodulator consists of a receiving filter with frequency response G_R(f) in cascade with a channel equalizing filter that has frequency response G_E(f). Since G_R(f) is matched to G_T(f) and they are designed so that their product satisfies |G_T(f)||G_R(f)| = X_{rc}(f), |G_E(f)| must compensate for the channel distortion. Hence, the equalizer frequency response must equal the inverse of the channel response,

G_E(f) = \frac{1}{C(f)} = \frac{1}{|C(f)|}\, e^{-j\theta_c(f)}, \quad |f| \le W

where |G_E(f)| = 1/|C(f)| and the equalizer phase characteristic is \theta_E(f) = -\theta_c(f). In this case, the equalizer is said to be the inverse channel filter to the channel response. We note that the inverse channel filter completely eliminates the ISI caused by the channel. Since it forces the ISI to be zero at the sampling times t = mT, the equalizer is called a zero-forcing equalizer.
Hence, the input to the detector is of the form

z_m = a_m + \eta_m

where \eta_m is the noise component, which is zero-mean Gaussian with a variance

\sigma^2 = \int_{-W}^{W} S_v(f)\, |G_R(f)|^2 |G_E(f)|^2 \, df = \int_{-W}^{W} \frac{S_v(f)\, X_{rc}(f)}{|C(f)|^2} \, df

in which S_v(f) is the power spectral density of the noise. When the noise is white, S_v(f) = N_0/2, and the variance becomes

\sigma^2 = \frac{N_0}{2} \int_{-W}^{W} \frac{X_{rc}(f)}{|C(f)|^2} \, df

In general, the noise variance at the output of the zero-forcing equalizer is higher than the noise variance at the output of the optimum receiving filter for the case in which the channel is known.
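To make the noise-enhancement effect concrete, the sketch below numerically evaluates the white-noise variance integral for a hypothetical low-pass channel with |C(f)|^2 = 1/(1 + (f/W)^2) (an assumed example channel, not from the text) and compares it against the ideal-channel case |C(f)| = 1, where the integral reduces to the integral of X_rc(f), i.e., x(0) = 1. Both integrals are scaled relative to N_0/2.

```python
import math

T = 1.0
beta = 0.5
W = (1 + beta) / (2 * T)       # raised cosine band edge

def x_rc(f):
    """Raised cosine spectrum with roll-off beta."""
    af = abs(f)
    f1 = (1 - beta) / (2 * T)
    if af <= f1:
        return T
    if af <= (1 + beta) / (2 * T):
        return (T / 2) * (1 + math.cos(math.pi * T / beta * (af - f1)))
    return 0.0

def c_mag2(f):
    """Hypothetical channel magnitude-squared response (an assumption for illustration)."""
    return 1.0 / (1.0 + (f / W) ** 2)

def integrate(g, a, b, n=4000):
    """Composite trapezoidal rule."""
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * h) for i in range(1, n)))

# Noise variance at the ZFE output, relative to N0/2:
var_zfe = integrate(lambda f: x_rc(f) / c_mag2(f), -W, W)
# Baseline (ideal channel): integral of X_rc(f) over the band = x(0) = 1
var_mf = integrate(x_rc, -W, W)
```

Since 1/|C(f)|^2 ≥ 1 across the band for this channel, the ZFE output variance is strictly larger than the baseline, which is the noise-enhancement penalty described above.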

Linear transversal filter


Let us now consider the design of a linear equalizer from a time-domain viewpoint. In real channels, the ISI is limited to a finite number of samples, say L samples. The channel equalizer is approximated by a finite-duration impulse response (FIR) filter, or transversal filter, with adjustable tap coefficients {c_n}, as illustrated in Figure 2.21.

Figure 2.21 Linear transversal filter

The time delay τ between adjacent taps may be selected as large as T, the symbol interval, in which case the FIR equalizer is called a symbol-spaced equalizer.
The input to the equalizer is the sampled sequence given by

y(kT) = \sum_{n} a_n x(kT - nT) + v(kT)

When τ = T and the symbol rate 1/T < 2W, frequencies in the received signal that are above the folding frequency 1/(2T) are aliased into frequencies below 1/(2T). In this case, the equalizer compensates for the aliased channel-distorted signal.
When the time delay τ between adjacent taps is selected such that τ ≤ 1/(2W) < T, no aliasing occurs; hence, the inverse channel equalizer compensates for the true channel distortion. Since τ < T, the channel equalizer is said to have fractionally spaced taps, and it is called a fractionally spaced equalizer.
In practice, τ is often selected as τ = T/2. In this case, the sampling rate at the output of the receiving filter is 2/T.
The impulse response of the FIR equalizer is

g_E(t) = \sum_{n=-N}^{N} c_n\, \delta(t - n\tau)

and the corresponding frequency response is

G_E(f) = \sum_{n=-N}^{N} c_n\, e^{-j2\pi f n \tau}

where {c_n} are the 2N + 1 equalizer coefficients and N is chosen sufficiently large so that the equalizer spans the length of the ISI, i.e., 2N + 1 ≥ L.
Since X(f) = G_T(f) C(f) G_R(f) and x(t) is the signal pulse corresponding to X(f), the equalized output signal pulse is

q(t) = \sum_{n=-N}^{N} c_n\, x(t - n\tau)

The zero-forcing condition can now be applied to the samples of q(t) taken at times t = mT. These samples are

q(mT) = \sum_{n=-N}^{N} c_n\, x(mT - n\tau), \quad m = 0, \pm 1, \ldots, \pm N

Since there are 2N + 1 equalizer coefficients, we can control only 2N + 1 sampled values of q(t). Specifically, we may force the conditions

q(mT) = \sum_{n=-N}^{N} c_n\, x(mT - n\tau) = \begin{cases} 1, & m = 0 \\ 0, & m = \pm 1, \pm 2, \ldots, \pm N \end{cases}

which may be expressed in matrix form as Xc = q, where X is a (2N+1) \times (2N+1) matrix with elements x(mT - n\tau), c is the (2N+1)-dimensional coefficient vector, and q is the (2N+1)-dimensional column vector with one nonzero element. Thus, we obtain a set of 2N + 1 linear equations for the coefficients of the zero-forcing equalizer (ZFE).
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Example 2.6

Consider a channel-distorted pulse x(t), at the input to the equalizer, given by the expression

x(t) = \frac{1}{1 + (2t/T)^2}

where 1/T is the symbol rate. The pulse is sampled at the rate 2/T and equalized by a zero-forcing equalizer. Determine the coefficients of a five-tap zero-forcing equalizer.
Solution: The zero-forcing equalizer must satisfy the equations

q(mT) = \sum_{n=-2}^{2} c_n\, x(mT - nT/2) = \begin{cases} 1, & m = 0 \\ 0, & m = \pm 1, \pm 2 \end{cases}

The matrix X with elements x(mT - nT/2) is obtained by substituting t = mT - nT/2 into the expression for x(t), which gives

X_{mn} = \frac{1}{1 + (2m - n)^2}, \quad m, n = -2, \ldots, 2

The coefficient vector c and the vector q are given as

c = (c_{-2}, c_{-1}, c_0, c_1, c_2)^T, \quad q = (0, 0, 1, 0, 0)^T

Then, the linear equations Xc = q can be solved by inverting the matrix X; thus, c_{opt} = X^{-1} q.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
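The five-tap ZFE of Example 2.6 can be computed numerically. The sketch below builds the matrix X with elements x(mT - nT/2) = 1/(1 + (2m - n)^2), solves Xc = q with a small Gaussian-elimination helper (written here for illustration), and then verifies the zero-forcing conditions q(mT) = δ_{m0}.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# x(t) = 1/(1 + (2t/T)^2) sampled at t = mT - n*T/2 gives x = 1/(1 + (2m - n)^2).
N = 2
X = [[1.0 / (1 + (2 * m - n) ** 2) for n in range(-N, N + 1)] for m in range(-N, N + 1)]
q = [0.0, 0.0, 1.0, 0.0, 0.0]
c_opt = solve(X, q)

# Verify the zero-forcing conditions q(mT) = delta_{m0}.
q_check = [sum(X[m][n] * c_opt[n] for n in range(5)) for m in range(5)]
```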
Minimum Mean-Square-Error (MMSE)

One drawback to the zero-forcing equalizer is that it ignores the presence of additive noise. As a consequence, its use may result in significant noise enhancement.
A channel equalizer that is optimized based on the minimum mean-square-error (MMSE) criterion accomplishes the desired goal.
Let us consider the noise-corrupted output of the FIR equalizer, which is

z(t) = \sum_{n=-N}^{N} c_n\, y(t - n\tau)

where y(t) is the input to the equalizer, which is given by

y(t) = \sum_{k} a_k\, x(t - kT) + v(t)

The output is sampled at times t = mT. Thus, we obtain

z(mT) = \sum_{n=-N}^{N} c_n\, y(mT - n\tau)

The desired response sample at the output of the equalizer at t = mT is the transmitted symbol a_m. The error is defined as the difference between a_m and z(mT). The mean square error (MSE) between the actual output sample z(mT) and the desired value a_m is

MSE = E\left[\left(z(mT) - a_m\right)^2\right]
    = E\left[\left(\sum_{n=-N}^{N} c_n\, y(mT - n\tau) - a_m\right)^2\right]
    = \sum_{n=-N}^{N} \sum_{k=-N}^{N} c_n c_k R_Y(n - k) - 2 \sum_{k=-N}^{N} c_k R_{AY}(k) + E\left[a_m^2\right]

where the correlations are defined as

R_Y(n - k) = E\left[y(mT - n\tau)\, y(mT - k\tau)\right]

and

R_{AY}(k) = E\left[y(mT - k\tau)\, a_m\right]

and the expectation is taken with respect to the random information sequence and the additive noise.
The MMSE solution is obtained by differentiating the MSE equation with respect to the equalizer coefficients {c_n}. Thus, we obtain the necessary conditions for the MMSE as

\sum_{n=-N}^{N} c_n R_Y(n - k) = R_{AY}(k), \quad k = 0, \pm 1, \ldots, \pm N

These are 2N + 1 linear equations for the equalizer coefficients. These equations depend on the statistical properties (the autocorrelation) of the noise as well as the ISI through the autocorrelation R_Y(n).
These correlation sequences can be estimated by transmitting a test signal over the channel and using the time-average estimates

\hat{R}_Y(n) = \frac{1}{K} \sum_{k=1}^{K} y(kT - n\tau)\, y(kT)

and

\hat{R}_{AY}(n) = \frac{1}{K} \sum_{k=1}^{K} y(kT - n\tau)\, a_k

in place of the ensemble averages to solve for the equalizer coefficients.
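The procedure above can be simulated end to end. The sketch below (an illustration with assumed parameters, not from the text) generates a binary training sequence through a hypothetical symbol-spaced discrete-time channel with taps h = [0.2, 1.0, 0.3] plus white Gaussian noise, forms time-average estimates of R_Y and R_AY, solves the 2N + 1 linear equations for a five-tap MMSE equalizer, and checks that equalization reduces the mean square error. The desired symbol is taken as a_{m-1}, matching the assumed channel's main tap delay.

```python
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

random.seed(1)
h = [0.2, 1.0, 0.3]          # assumed equivalent discrete-time channel taps
K = 20000                     # length of the training sequence
a = [random.choice((-1, 1)) for _ in range(K)]
y = [sum(h[j] * a[k - j] for j in range(len(h)) if 0 <= k - j < K)
     + random.gauss(0.0, 0.1) for k in range(K)]

N = 2                         # 2N + 1 = 5 symbol-spaced taps (tau = T)
lo, hi = 2 * N, K - 2 * N     # index range where all needed samples exist

def Ry(lag):
    """Time-average estimate of R_Y(lag) = E[y_m y_{m-lag}]."""
    return sum(y[m] * y[m - lag] for m in range(lo, hi)) / (hi - lo)

def Ray(lag):
    """Time-average estimate of R_AY(lag) = E[a_{m-1} y_{m-lag}]."""
    return sum(a[m - 1] * y[m - lag] for m in range(lo, hi)) / (hi - lo)

R = [[Ry(n - k) for n in range(-N, N + 1)] for k in range(-N, N + 1)]
rhs = [Ray(k) for k in range(-N, N + 1)]
c_mmse = solve(R, rhs)

# Compare the equalized MSE against the unequalized received samples.
z = [sum(c_mmse[n + N] * y[m - n] for n in range(-N, N + 1)) for m in range(lo, hi)]
mse_eq = sum((zm - a[m - 1]) ** 2 for zm, m in zip(z, range(lo, hi))) / len(z)
mse_raw = sum((y[m] - a[m - 1]) ** 2 for m in range(lo, hi)) / (hi - lo)
```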
