ECT 305
ANALOG AND DIGITAL COMMUNICATION
Module 4
Gram-Schmidt Procedure and Effects in the Channel
Prepared by
Sithara Jeyaraj
Assistant Professor
Dept. of ECE, MACE
Syllabus
• Gram-Schmidt procedure.
• Signal space.
• Baseband transmission through AWGN channel.
• Mathematical model of ISI. Nyquist criterion for zero ISI.
• Signal modeling for ISI: raised cosine and square-root raised cosine spectrum.
• Partial response signalling and duobinary coding.
• Equalization.
• Design of zero forcing equalizer.
• Vector model of AWGN channel.
• Matched filter and correlation receivers. MAP receiver.
• Maximum likelihood receiver and probability of error.
• Capacity of an AWGN channel (expression only) – significance in the design of communication schemes.
Baseband transmission through AWGN channel
Intersymbol Interference
Nyquist’s Criterion for Distortionless Baseband Binary Transmission
Ideal Solution
• The special value of the bit rate Rb = 2W is called the Nyquist rate, and W itself is called the Nyquist bandwidth.
• Correspondingly, the baseband pulse p(t) for distortionless transmission is called the ideal Nyquist pulse, ideal in the sense that the bandwidth requirement is one half the bit rate.
• Figure 8.4 shows plots of P(f) and p(t).
• In part a of the figure, the normalized form of the frequency function P(f) is plotted for positive and negative frequencies.
• In part b of the figure, we have also included the signaling intervals and the corresponding centered sampling instants.
• The function p(t) can be regarded as the impulse response of an ideal low-pass filter with passband magnitude response 1/2W and bandwidth W.
• The function p(t) has its peak value at the origin and goes through zero at integer multiples of the bit duration Tb.
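As a small numerical sketch (added here for illustration; the values of W and Tb are arbitrary), the ideal Nyquist pulse p(t) = sinc(2Wt) can be evaluated to confirm its peak at the origin and its zeros at nonzero integer multiples of Tb = 1/(2W):

```python
import math

def nyquist_pulse(t, W):
    """Ideal Nyquist pulse p(t) = sinc(2*W*t): the impulse response
    (up to a gain factor) of an ideal low-pass filter of bandwidth W."""
    x = 2.0 * W * t
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

W = 500.0               # Nyquist bandwidth in Hz (illustrative value)
Tb = 1.0 / (2.0 * W)    # bit duration at the Nyquist rate Rb = 2W

# Peak value at the origin, zeros at every nonzero integer multiple of Tb.
assert nyquist_pulse(0.0, W) == 1.0
assert abs(nyquist_pulse(3 * Tb, W)) < 1e-9
```

Sampling at the instants kTb therefore picks up one bit at a time with no contribution from neighbouring pulses, which is exactly the zero-ISI condition.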
Practical Difficulties
Raised-Cosine Spectrum
• We may overcome the practical difficulties encountered with the ideal Nyquist pulse by
extending the bandwidth from the minimum value W = Rb/2 to an adjustable value
between W and 2W.
• In effect, we are trading off increased channel bandwidth for a more robust signal design
that is tolerant of timing errors.
• Specifically, the overall frequency response P(f) is designed to satisfy a condition more stringent than that for the ideal Nyquist pulse, in that we retain three terms of the summation on the left-hand side of equation E and restrict the frequency band of interest to [–W, W], as shown by:
• where, on the right-hand side, we have set Tb = 1/(2W).
• We may now devise several band-limited functions that satisfy the above equation.
• The time response p(t) is naturally the inverse Fourier transform of the frequency response P(f). Hence, transforming the P(f) defined in (8.22) into the time domain, we obtain:
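The slide's equation itself did not survive extraction; the standard closed form of this inverse transform is p(t) = sinc(2Wt)·cos(2παWt)/(1 − 16α²W²t²), where α is the rolloff factor. A small sketch (with illustrative values of W and α) checks that the raised-cosine pulse keeps the Nyquist zero crossings:

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def rc_pulse(t, W, alpha):
    """Raised-cosine pulse p(t) = sinc(2Wt) * cos(2*pi*alpha*W*t) / (1 - 16*alpha^2*W^2*t^2).
    alpha is the rolloff factor, 0 <= alpha <= 1; alpha = 0 recovers the ideal Nyquist pulse."""
    denom = 1.0 - 16.0 * (alpha * W * t) ** 2
    if abs(denom) < 1e-12:
        # removable singularity at t = +/- 1/(4*alpha*W): step slightly off it
        t = t + 1e-9
        denom = 1.0 - 16.0 * (alpha * W * t) ** 2
    return sinc(2.0 * W * t) * math.cos(2.0 * math.pi * alpha * W * t) / denom

W, alpha = 500.0, 0.5
Tb = 1.0 / (2.0 * W)

# The rolloff does not disturb the zero crossings at nonzero multiples of Tb:
assert abs(rc_pulse(0.0, W, alpha) - 1.0) < 1e-12
assert abs(rc_pulse(2 * Tb, W, alpha)) < 1e-9
```

The extra factor cos(2παWt)/(1 − 16α²W²t²) makes the tails decay as 1/t³ rather than 1/t, which is what buys the tolerance to timing errors mentioned above.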
Square-Root Raised-Cosine Spectrum
• A more sophisticated form of pulse shaping uses the square-root raised-cosine (SRRC)spectrum
rather than the conventional RC spectrum of Equation A.
• Specifically, the spectrum of the basic pulse is now defined by the square root of the right-hand
side of this equation.
• Thus, using the trigonometric identity
• To avoid confusion, we use G( f ) as the symbol for the SRRC spectrum, and so we may write
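A quick numerical check (illustrative values; the function names are our own) that squaring the SRRC spectrum G(f) recovers the raised-cosine spectrum P(f), which is why an SRRC filter at the transmitter cascaded with an SRRC filter at the receiver yields an overall RC response:

```python
import math

def rc_spectrum(f, W, alpha):
    """Raised-cosine spectrum P(f), normalized so that P(0) = 1/(2W)."""
    f1 = W * (1.0 - alpha)                 # edge of the flat portion
    fa = abs(f)
    if fa < f1:
        return 1.0 / (2.0 * W)
    if fa < W * (1.0 + alpha):             # rolloff portion
        return (1.0 / (4.0 * W)) * (1.0 + math.cos(math.pi * (fa - f1) / (2.0 * alpha * W)))
    return 0.0

def srrc_spectrum(f, W, alpha):
    """SRRC spectrum G(f): the square root of the RC spectrum."""
    return math.sqrt(rc_spectrum(f, W, alpha))

W, alpha = 500.0, 0.35
assert rc_spectrum(0.0, W, alpha) == 1.0 / (2.0 * W)
for f in (0.0, 200.0, 500.0, 600.0):
    assert abs(srrc_spectrum(f, W, alpha) ** 2 - rc_spectrum(f, W, alpha)) < 1e-12
```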
Partial response signalling
• Thus far we have treated ISI as an undesirable phenomenon that produces a degradation in system performance.
• By adding ISI to the transmitted signal in a controlled manner, however, it is possible to achieve a bit rate of 2W bits per second in a channel bandwidth of W hertz.
• Such schemes are called correlative coding or partial-response signalling schemes.
• Since the ISI introduced into the transmitted signal is known, its effect can be interpreted at the receiver in a deterministic way.
• Thus correlative-level coding may be regarded as a practical method of achieving the theoretical maximum signalling rate of 2W symbols per second in a bandwidth of W hertz, as postulated by Nyquist, using realizable filters.
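The duobinary scheme discussed next can be sketched in a few lines. The precoding step d_k = a_k ⊕ d_(k−1) shown here is the standard way to avoid error propagation at the receiver; the variable names are illustrative, not from the slides:

```python
# A minimal sketch of duobinary (class-1 partial response) coding with precoding.

def duobinary_encode(bits):
    """Precode d_k = a_k XOR d_(k-1), map bits to +/-1, then form the
    three-level transmitted sequence c_k = b_k + b_(k-1) (controlled ISI)."""
    d_prev = 0                  # precoder reference bit
    b_prev = 2 * d_prev - 1     # bipolar version of the reference
    out = []
    for a in bits:
        d = a ^ d_prev          # precoding removes error propagation
        b = 2 * d - 1           # 0/1 -> -1/+1
        out.append(b + b_prev)  # levels are drawn from {-2, 0, +2}
        d_prev, b_prev = d, b
    return out

def duobinary_decode(levels):
    """Symbol-by-symbol detection: |c_k| = 2 -> bit 0, c_k = 0 -> bit 1."""
    return [0 if abs(c) == 2 else 1 for c in levels]

data = [0, 1, 1, 0, 1, 0, 0, 1]
assert duobinary_decode(duobinary_encode(data)) == data
```

Because the ISI is introduced deterministically (each c_k correlates exactly two adjacent bits), the receiver can undo it symbol by symbol, which is the sense in which the ISI here is "controlled".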
Duobinary Signalling
Chapter 5: Signal Space Analysis
Conversion of the Continuous AWGN Channel into a
Vector Channel
• Suppose that si(t) is not just any signal, but specifically the signal at the receiver side, defined in accordance with an AWGN channel:
• So the output of the correlator (Fig. 5.3b) can be defined as:
(The first term is a deterministic quantity contributed by the transmitted signal si(t); the second is a random quantity, the sample value of the variable Wi due to noise.)
Now,
• Consider a random process X1(t), with x1(t) a sample function which is related to the received signal x(t) as follows:
• Using 5.28, 5.29 and 5.30 and the expansion 5.5 we get:
which means that the sample function x1(t) depends only on the channel noise!
• The received signal can be expressed as:
NOTE: This is an expansion similar to the one in 5.5, but it is random, due to the additive noise.
Statistical Characterization
• The received signal (output of the correlator of Fig. 5.3b) is a random signal.
• To describe it we need to use statistical methods – mean and variance.
• The assumptions are:
– X(t) denotes a random process, a sample function of which is represented by the received signal x(t).
– Xj denotes a random variable whose sample value is represented by the correlator output xj, j = 1, 2, …, N.
– We have assumed AWGN, so the noise is Gaussian; X(t) is therefore a Gaussian process and, being a Gaussian RV, Xj is described fully by its mean value and variance.
Mean Value
• Let Wj denote a random variable, represented by its sample value wj, produced by the jth correlator in response to the Gaussian noise component w(t).
• It has zero mean (by definition of the AWGN model),
• …so the mean of Xj depends only on sij:
Variance
• Starting from the definition, we substitute using 5.29 and 5.31:
(The term that appears in the resulting double integral is the autocorrelation function of the noise process.)
• It can be expressed as:
(because the noise is stationary and with a constant power spectral density)
• After substitution for the variance we get:
• And since φj(t) has unit energy, for the variance we finally have:
• Correlator outputs, denoted by Xj, have variance equal to the power spectral density N0/2 of the noise process W(t).
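A Monte-Carlo sketch of this result (illustrative values, not from the slides): a correlator output Xj = sij + Wj has mean sij and variance N0/2.

```python
import random

random.seed(1)

N0 = 2.0                      # noise PSD; each correlator output has variance N0/2
s_ij = 0.7                    # deterministic signal coordinate (illustrative value)
sigma = (N0 / 2.0) ** 0.5     # standard deviation of the noise component Wj

# Draw many realizations of the correlator output Xj = s_ij + Wj.
samples = [s_ij + random.gauss(0.0, sigma) for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)

assert abs(mean - s_ij) < 0.02        # E[Xj] = s_ij
assert abs(var - N0 / 2.0) < 0.02     # var[Xj] = N0/2
```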
Properties (without proof)
• The Xj are mutually uncorrelated.
• The Xj are statistically independent (this follows from the above because the Xj are Gaussian).
• For a memoryless channel the following equation is true:
• Define (construct) a vector X of N random variables X1, X2, …, XN,
• whose elements are independent Gaussian RVs with mean values sij (output of the correlator; the deterministic part, defined by the signal transmitted)
• and variance equal to N0/2 (output of the correlator; the random part, the noise added by the channel).
• Then X1, X2, …, XN, the elements of X, are statistically independent.
• So, we can express the conditional probability of X, given si(t) (correspondingly symbol mi), as a product of the conditional density functions fXj of its individual elements.
NOTE: This is equal to finding an expression for the probability of a received symbol given that a specific symbol was sent, assuming a memoryless channel.
• …that is:
• where the vector x and the scalar xj are sample values of the random vector X and the random variable Xj.
• Vector x is called the observation vector.
• Scalar xj is called the observable element.
• Vector x and scalar xj are sample values of the random vector X and the random variable Xj.
• Since each Xj is Gaussian with mean sij and variance N0/2,
• we can substitute in 5.44 to get 5.46:
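A small numerical check of 5.46 (function names and values are our own): the closed-form conditional pdf equals the product of N independent Gaussian factors, each with mean sij and variance N0/2.

```python
import math

def likelihood(x, s_i, N0):
    """Conditional pdf fX(x|mi) for the vector AWGN channel (the 5.46 form):
    (pi*N0)^(-N/2) * exp(-(1/N0) * sum_j (x_j - s_ij)^2)."""
    N = len(x)
    sq = sum((xj - sj) ** 2 for xj, sj in zip(x, s_i))
    return (math.pi * N0) ** (-N / 2.0) * math.exp(-sq / N0)

def gauss(x, mu, var):
    """One-dimensional Gaussian pdf with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

x, s_i, N0 = [0.3, -1.1], [0.5, -1.0], 2.0

# Factor-by-factor product of the N Gaussians agrees with the closed form:
prod = 1.0
for xj, sj in zip(x, s_i):
    prod *= gauss(xj, sj, N0 / 2.0)
assert abs(prod - likelihood(x, s_i, N0)) < 1e-12
```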
• If we go back to the formulation of the received signal through an AWGN channel (5.34):
• Only the projections of the noise onto the basis functions of the signal set {si(t)}, i = 1, …, M, affect the significant statistics of the detection problem.
• The vector that we have constructed fully defines this part.
Finally,
• The AWGN channel is equivalent to an N-dimensional vector channel, described by the observation vector:
Likelihood Functions
• The conditional probability density functions fX(x|mi), i = 1, 2, …, M, are the very characterization of the AWGN channel.
• They express the functional dependence of the observation vector x on the transmitted message symbol mi.
However,
• If we have the observation vector given, and we want to determine the transmitted message signal, then we have the reverse situation.
• We introduce the likelihood function L(mi) as:
• or the log-likelihood function l(mi) as:
Log-Likelihood Function of the AWGN Channel
• Substitute 5.46 into 5.50 to obtain the vector representation of the AWGN channel:
• where sij, j = 1, 2, …, N are the elements of the signal vector si, representing the message symbol mi.
So,
which is the log-likelihood function of the AWGN channel.
5.5 Maximum Likelihood Decoding
• Defining the problem:
– Suppose that in each time slot of duration T seconds, one of M possible signals, s1(t), s2(t), …, sM(t), is transmitted with equal probability 1/M.
– For the vector representation, the signal si(t), i = 1, 2, …, M is applied to a bank of correlators, with a common input and supplied with a suitable set of N orthonormal basis functions. The resulting output defines the signal vector si.
– We represent each signal si(t) as a point in the Euclidean space, N ≤ M (referred to as the transmitted signal point or message point). The set of message points corresponding to the set of transmitted signals si(t), i = 1, …, M, is called the signal constellation.
Figure 5.3
(a) Synthesizer for generating the signal si(t). (b) Analyzer for generating the set of signal vectors {si}.
– The received signal x(t) is applied to a bank of N correlators (Fig. 5.3b) and the correlator outputs define the observation vector x.
– On the receiving side the representation of the received signal x(t) is complicated by the additive noise w(t).
– As we discussed in the previous class, the vector x differs from the vector si by the noise vector w.
– However, only the portion of the noise which interferes with the detection process is of importance to us, and this portion is fully described by the noise vector w.
• Based on the observation vector x we may represent the received signal x(t) by a point in the same Euclidean space used to represent the transmitted signal.
Maximum A Posteriori Probability (MAP) Rule
So the optimum decision rule is:
where k = 1, 2, …, M.
• The same rule can be more explicitly expressed using the a priori probabilities of the transmitted signals as:
(The quantities involved are: the a priori probability of transmitting mk; the conditional pdf of the observation vector X given that mk was transmitted; and the unconditional pdf of the observation vector X.)
• Thus we can conclude that, according to the definition of the likelihood functions, the likelihood function l(mk) will be maximum for k = i.
• So the decision rule using the likelihood function is formulated as:
Maximum Likelihood rule
• For a graphical representation of the maximum likelihood rule we introduce the following:
– Observation space Z: N-dimensional, consisting of all possible observation vectors x.
– Z is partitioned into M decision regions, Z1, Z2, …, ZM.
Figure 5.8
Illustrating the partitioning of the observation space into decision regions for the case when N = 2 and M = 4; it is assumed that the M transmitted symbols are equally likely.
For the AWGN channel…
• Based on the log-likelihood function of the AWGN channel, l(mk) will be maximum when the term:
is minimized by k = i.
• Decision rule for the AWGN channel:
• Or, using Euclidean space notation:
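As a sketch of this rule (the 4-point constellation below is an illustrative example, not from the slides), ML detection over the vector AWGN channel reduces to a nearest-neighbor search:

```python
def ml_decide(x, constellation):
    """Choose the index of the message point si closest (in Euclidean
    distance) to the observation vector x -- equivalent to maximizing
    the log-likelihood l(mi) for the AWGN channel."""
    def dist_sq(s):
        return sum((xj - sj) ** 2 for xj, sj in zip(x, s))
    return min(range(len(constellation)), key=lambda i: dist_sq(constellation[i]))

# QPSK-like signal constellation with N = 2 dimensions and M = 4 points.
points = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]

assert ml_decide((0.9, 1.2), points) == 0    # falls in decision region Z1
assert ml_decide((-0.2, -0.7), points) == 2  # falls in decision region Z3
```

Each decision region Zk of Figure 5.8 is exactly the set of observation vectors for which `ml_decide` returns k.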
Finally,
• (5.59) states that the maximum likelihood decision rule is simply to choose the message point closest to the received signal point.
• After a few re-organizations we get (left as a homework exercise for you):
where the energy of the transmitted signal sk(t) appears as one of the terms.
Correlation Receiver
5.7 Probability of Error
• To complete the statistical characterization of the correlation receiver (Fig. 5.9) we need to discuss its noise performance.
• Using the assumptions made before, we can define the average probability of error Pe as:
• Using the likelihood function this can be re-written as:
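A Monte-Carlo sketch of this noise performance (binary antipodal signalling with illustrative values, not from the slides), comparing the simulated error rate of the minimum-distance receiver against the Gaussian Q-function prediction Pe = Q(a/σ):

```python
import math
import random

random.seed(7)

def q_func(x):
    """Gaussian Q-function, expressed via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Binary antipodal signalling +/-a over the vector AWGN channel.
a, N0 = 1.0, 0.5
sigma = math.sqrt(N0 / 2.0)       # noise standard deviation per dimension

trials, errors = 200_000, 0
for _ in range(trials):
    x = a + random.gauss(0.0, sigma)  # "+a" always sent; the channel is symmetric
    if x < 0.0:                        # minimum-distance decision boundary at 0
        errors += 1

pe_sim = errors / trials
pe_theory = q_func(a / sigma)          # Pe = Q(a/sigma) for antipodal signals

assert abs(pe_sim - pe_theory) < 0.003
```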
Matched Filter
Properties of Matched Filter
Equalization
The Zero-forcing Equalizer
Error rate due to noise
Channel Capacity Theorem