Communications
ECE-III-II Sem
R15
Communication
• The main purpose of communication is to transfer information from a
source to a recipient via a channel or medium.
Brief Description
• Source: analog or digital
• Transmitter: transducer, amplifier, modulator, oscillator, power amp.,
antenna
• Channel: e.g. cable, optical fibre, free space
• Receiver: antenna, amplifier, demodulator, oscillator, power amplifier,
transducer
• Recipient: e.g. person, (loud) speaker, computer
• Types of information
Voice, data, video, music, email etc.
• Information Source
– Discrete output values e.g. Keyboard
– Analog signal source e.g. output of a microphone
• Character
– Member of an alphanumeric/symbol set (A to Z, 0 to 9, etc.)
– Characters can be mapped into a sequence of binary digits using
one of the standardized codes such as
• ASCII: American Standard Code for Information Interchange
• EBCDIC: Extended Binary Coded Decimal Interchange Code
Digital Signal Nomenclature
• Digital Message
– Messages constructed from a finite number of symbols; e.g., printed language
consists of 26 letters, 10 numbers, “space” and several punctuation marks.
Hence a text is a digital message constructed from about 50 symbols
– Morse-coded telegraph message is a digital message constructed from two
symbols “Mark” and “Space”
• M - ary
– A digital message constructed with M symbols
• Digital Waveform
– Current or voltage waveform that represents a digital symbol
• Bit Rate
– Actual rate at which information is transmitted per second
Digital Signal Nomenclature
• Baud Rate
– Refers to the rate at which the signaling elements are
transmitted, i.e. number of signaling elements per second.
Pulse Code Modulation (PCM)
2. Quantization: The process of dividing
the maximum value of the analog signal
into a fixed number of levels in order to
convert the PAM samples into a binary code.
The levels obtained are called
“quantization levels”.
[Figure: Sampling, Quantization and Coding – an analog voltage waveform is sampled, each sample is mapped to one of 8 quantization levels (levels 0 to 7, binary codes 000 to 111), and the level codes are transmitted as the binary stream 01010111011111010101.]
Pulse Code Modulation (PCM)
Quantization
• By quantizing the PAM pulses, the original signal is only
approximated
• The process of converting analog samples into these discrete
levels (and thence to PCM) is called quantizing
• Since the original signal can have an infinite number
of signal levels, the quantizing process produces
errors, called quantizing errors or quantizing noise
[Figure: Quantization and encoding of a sampled signal]
Quantization Error
When a signal is quantized, we introduce an error:
the coded signal is an approximation of the actual
amplitude value.
The difference between the actual value and the coded value
(the zone midpoint) is referred to as the quantization error.
The more zones, the smaller the zone width, which results in
smaller errors.
BUT, the more zones, the more bits required to
encode the samples -> higher bit rate
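The zones-vs-bits trade-off can be made concrete with a small sketch. This is a mid-rise uniform quantizer; the test signal, bit counts and sampling grid are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def quantize(x, n_bits, xmax=1.0):
    """Mid-rise uniform quantizer: map x in [-xmax, xmax] onto 2**n_bits levels."""
    levels = 2 ** n_bits
    step = 2 * xmax / levels                 # zone width
    idx = np.clip(np.floor((x + xmax) / step), 0, levels - 1)
    return (idx + 0.5) * step - xmax         # coded value = zone midpoint

t = np.linspace(0, 1, 1000, endpoint=False)
x = np.sin(2 * np.pi * t)                    # illustrative test signal

for n_bits in (3, 6):
    err = x - quantize(x, n_bits)            # quantization error
    print(n_bits, np.max(np.abs(err)))       # bounded by half a zone width
```

Going from 3 to 6 bits multiplies the number of zones by 8, so the worst-case error shrinks by the same factor, at the cost of twice the bit rate per sample.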
Quantization Error (cont.)
• Round-off error
• Overload error (occurs when the input exceeds the highest quantization level)
Quantization Noise
Delta Modulation
In delta modulation, only one bit is transmitted per
sample.
That bit is a one if the current sample is more positive
than the previous sample, and a zero if it is more
negative.
Since so little information is transmitted, delta
modulation requires a higher sampling rate than PCM
for equal quality of reproduction.
Delta Modulation
This scheme sends only the difference between
pulses: if the pulse at time t(n+1) is higher in amplitude
than the pulse at time t(n), then a single bit, say a
“1”, is used to indicate the positive change.
If the pulse is lower in value, resulting in a negative
change, a “0” is used.
This scheme works well for small changes in signal
values between samples.
If changes in amplitude are large, this will result in
large errors.
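A minimal sketch of this one-bit-per-sample scheme (the test signal, step size and sampling rate are illustrative assumptions):

```python
import numpy as np

def dm_encode(x, step):
    """Delta modulation: emit 1 if the sample lies above the staircase
    approximation (the staircase then rises by `step`), else emit 0
    (the staircase falls by `step`)."""
    approx, bits, stair = 0.0, [], []
    for sample in x:
        bit = 1 if sample > approx else 0
        approx += step if bit else -step
        bits.append(bit)
        stair.append(approx)
    return np.array(bits), np.array(stair)

t = np.linspace(0, 1, 200, endpoint=False)
x = np.sin(2 * np.pi * t)            # slow signal: per-sample change < step
bits, stair = dm_encode(x, step=0.05)
print(np.max(np.abs(x - stair)))     # small granular tracking error
```

With a faster signal the per-sample change exceeds the fixed step, the staircase lags behind, and the large errors described above appear.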
Delta Modulation
• Distortions in a DM system
Granular noise occurs when the step size Δ is large relative
to the local slope of m(t).
(Conversely, slope-overload distortion occurs when Δ is too small
to follow a steep segment of m(t).)
There is a further modification of this system, in which the
step size is not fixed.
That scheme is known as Adaptive Delta Modulation.
Adaptive Delta Modulation
• Better performance can be achieved if
the value of Δ is not fixed.
• The value of Δ changes according to the
amplitude of the analog signal.
• It has a wide dynamic range due to the variable
step size.
• It also utilises bandwidth better than plain
delta modulation.
• Improvement in signal-to-noise ratio.
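One common way to realize the variable step size is sketched below; the adaptation rule (growth factor K), the step limits, and the test signal are assumptions for illustration, not the specific rule of the slides:

```python
import numpy as np

def adm_encode(x, step0=0.01, K=1.5, step_min=0.005, step_max=0.5):
    """Adaptive delta modulation: grow the step while successive bits repeat
    (the staircase is chasing a steep slope), shrink it when bits alternate
    (the staircase is hunting around a flat region)."""
    approx, step, prev_bit, stair = 0.0, step0, None, []
    for sample in x:
        bit = 1 if sample > approx else 0
        if prev_bit is not None:
            step = min(step * K, step_max) if bit == prev_bit else max(step / K, step_min)
        approx += step if bit else -step
        prev_bit = bit
        stair.append(approx)
    return np.array(stair)

t = np.linspace(0, 1, 200, endpoint=False)
x = np.sin(2 * np.pi * 3 * t)        # too steep for a small fixed step
stair = adm_encode(x)
print(np.max(np.abs(x[50:] - stair[50:])))   # error after the initial transient
```

The widened dynamic range comes directly from the step limits: the same encoder follows both gentle and steep signal segments.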
Unit 2
Digital Band-Pass Modulation Techniques
• Digital band-pass modulation techniques
– Amplitude-shift keying
– Phase-shift keying
– Frequency-shift keying
• Receivers
– Coherent detection
• The receiver is synchronized to the transmitter with respect to carrier phases
– Noncoherent detection
• Offers the practical advantage of reduced complexity, but at the cost of degraded
performance
Some Preliminaries
– Decreasing the bit duration Tb has the effect of increasing the transmission
bandwidth requirement of a binary modulated wave.
– The sinusoidal carrier has amplitude $A_c = \sqrt{2/T_b}$, so that

  $c(t) = \sqrt{\frac{2}{T_b}}\cos(2\pi f_c t)$   (7.3)
• Band-Pass Assumption
– The spectrum of a digitally modulated wave is centered on the carrier
frequency fc

  $s(t) = b(t)\sqrt{\frac{2}{T_b}}\cos(2\pi f_c t)$   (7.5)

– The transmitted signal energy per bit

  $E_b = \int_0^{T_b} s^2(t)\,dt = \frac{2}{T_b}\int_0^{T_b} b^2(t)\cos^2(2\pi f_c t)\,dt$   (7.6)
Using $\cos^2(2\pi f_c t) = \tfrac{1}{2}[1 + \cos(4\pi f_c t)]$,

  $E_b = \frac{1}{T_b}\int_0^{T_b} b^2(t)\,dt + \frac{1}{T_b}\int_0^{T_b} b^2(t)\cos(4\pi f_c t)\,dt$   (7.7)

Under the band-pass assumption the second integral is negligible, so

  $E_b \approx \frac{1}{T_b}\int_0^{T_b} b^2(t)\,dt$   (7.8)
7.2 Binary Amplitude-Shift Keying
– The ON-OFF signaling variety

  $b(t) = \begin{cases}\sqrt{E_b}, & \text{for binary symbol 1}\\ 0, & \text{for binary symbol 0}\end{cases}$   (7.9)

  $s(t) = \begin{cases}\sqrt{2E_b/T_b}\,\cos(2\pi f_c t), & \text{for symbol 1}\\ 0, & \text{for symbol 0}\end{cases}$   (7.10)

– The average transmitted signal energy (the two binary symbols must be
equiprobable):

  $E_{av} = \frac{E_b}{2}$   (7.11)
• Generation and Detection of ASK Signals
– Generation of an ASK signal: by using a product modulator with two inputs
  • The ON-OFF signal of Eq. (7.9)

  $b(t) = \begin{cases}\sqrt{E_b}, & \text{for binary symbol 1}\\ 0, & \text{for binary symbol 0}\end{cases}$   (7.9)

  • The carrier wave $c(t) = \sqrt{2/T_b}\,\cos(2\pi f_c t)$
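A sketch of Eq. (7.10) in code. The bit pattern and the sampling rate fs are illustrative assumptions; fc = 8 Hz and Tb = 1 s follow the computer experiments later in the unit:

```python
import numpy as np

def bask(bits, Eb=1.0, fc=8.0, Tb=1.0, fs=100.0):
    """ON-OFF keying, Eq. (7.10): sqrt(2Eb/Tb)*cos(2*pi*fc*t) for symbol 1,
    silence for symbol 0."""
    n = int(Tb * fs)                                     # samples per bit
    t = np.arange(n) / fs
    burst = np.sqrt(2 * Eb / Tb) * np.cos(2 * np.pi * fc * t)
    return np.concatenate([burst if b else np.zeros(n) for b in bits])

s = bask([1, 0, 1])
print(len(s))            # 3 bits x 100 samples per bit
```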
• Spectral Analysis of BASK
– The objectives
1) To investigate the effect of varying the carrier frequency fc on the power
spectrum of the BASK signal s(t), assuming that the modulating square wave is fixed.
2) Recall that the power spectrum of a signal is defined as 10 times the base-10
logarithm of the squared magnitude spectrum of the signal (i.e., it is expressed in dB).
3) To investigate the effect of varying the frequency of the square wave on
the spectrum of the BASK signal, assuming that the sinusoidal carrier
wave is fixed.
1. The spectrum of the BASK signal contains a line
component at f=fc
2. When the square wave is fixed and the carrier
frequency is doubled, the mid-band frequency of the
BASK signal is likewise doubled.
3. When the carrier is fixed and the bit duration is halved,
the width of the main lobe of the sinc function
defining the envelope of the BASK spectrum is
doubled, which, in turn, means that the transmission
bandwidth of the BASK signal is doubled.
4. The transmission bandwidth of BASK, measured in
terms of the width of the main lobe of its spectrum, is
equal to 2/Tb, where Tb is the bit duration.
Phase-Shift Keying
• Binary Phase-Shift Keying (BPSK)
– A special case of double-sideband suppressed-carrier (DSB-SC)
modulation
– The pair of signals used to represent symbols 1 and 0:

  $s_i(t) = \begin{cases} \sqrt{\dfrac{2E_b}{T_b}}\cos(2\pi f_c t), & \text{for symbol 1 corresponding to } i = 1\\[4pt] \sqrt{\dfrac{2E_b}{T_b}}\cos(2\pi f_c t + \pi) = -\sqrt{\dfrac{2E_b}{T_b}}\cos(2\pi f_c t), & \text{for symbol 0 corresponding to } i = 2 \end{cases}$   (7.12)

– Antipodal signals
  • A pair of sinusoidal waves which differ only in a relative phase shift of π
radians.
• Generation and Coherent Detection of BPSK
Signals
1. Generation
– A product modulator consisting of two components:
1) Non-return-to-zero level encoder
  • The input binary data sequence is encoded in polar form, with symbols 1
and 0 represented by the constant-amplitude levels +√Eb and −√Eb
2) Product modulator
  • Multiplies the level-encoded binary wave by the sinusoidal carrier c(t) of
amplitude √(2/Tb) to produce the BPSK signal
2. Detection
– A receiver that consists of four sections
1) Product modulator; supplied with a locally generated reference signal
that is a replica of the carrier wave c(t)
2) Low-pass filter; designed to remove the double-frequency components of
the product modulator output
3) Sampler ; uniformly samples the output of the low-pass filter, the local
clock governing the operation of the sampler is synchronized with the
clock responsible for bit-timing in the transmitter.
4) Decision-making device; compares the sampled value of the low-pass
filter’s output to an externally supplied threshold. If the threshold is
exceeded, the device decides in favor of symbol 1; otherwise, it decides in
favor of symbol 0.
– What should the bandwidth of the filter be ?
• The bandwidth of the low-pass filter in the coherent BPSK receiver has to
be equal to or greater than the reciprocal of the bit duration Tb for
satisfactory operation of the receiver.
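The transmitter and the four receiver sections can be sketched as below; a correlator (integrate-and-dump) stands in for the product modulator, low-pass filter and sampler combined. Parameters fc = 8 Hz and Tb = 1 s match the computer experiments; the sampling rate fs is an assumption:

```python
import numpy as np

fc, Tb, fs = 8.0, 1.0, 100.0
n = int(Tb * fs)                                 # samples per bit
t = np.arange(n) / fs
carrier = np.sqrt(2 / Tb) * np.cos(2 * np.pi * fc * t)

def bpsk_mod(bits, Eb=1.0):
    # NRZ level encoder (1 -> +sqrt(Eb), 0 -> -sqrt(Eb)) feeding a product modulator
    levels = np.where(np.array(bits) == 1, np.sqrt(Eb), -np.sqrt(Eb))
    return np.concatenate([lv * carrier for lv in levels])

def bpsk_detect(s):
    out = []
    for k in range(len(s) // n):
        # multiply by a local carrier replica and integrate over one bit
        corr = np.sum(s[k * n:(k + 1) * n] * carrier) / fs
        out.append(1 if corr > 0 else 0)         # threshold at zero
    return out

print(bpsk_detect(bpsk_mod([1, 0, 0, 1, 1])))    # recovers [1, 0, 0, 1, 1]
```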
• Spectral Analysis of BPSK
– The objectives
1. To evaluate the effect of varying the carrier frequency fc on the power
spectrum of the BPSK signal, for a fixed square modulating wave.
2. To evaluate the effect of varying modulation frequency on the power
spectrum of the BPSK signal, for a fixed carrier frequency.
1. BASK and BPSK signals occupy the same transmission
bandwidth, which defines the width of the main lobe
of the sinc-shaped power spectra.
2. The BASK spectrum includes a carrier component,
whereas this component is absent from the BPSK
spectrum. With this observation we are merely
restating the fact that BASK is an example of
amplitude modulation, whereas BPSK is an example of
double-sideband suppressed-carrier modulation.
  • The presence of the carrier in the BASK spectrum means that the binary data
stream can be recovered by envelope detection of the BASK signal.
  • On the other hand, suppression of the carrier in the BPSK spectrum mandates
the use of coherent detection for recovery of the binary data stream from the
BPSK signal.
• Quadriphase-Shift Keying
– An important goal of digital communication is the
efficient utilization of channel bandwidth
– In QPSK (quadriphase-shift keying)
  • The phase of the sinusoidal carrier takes on one of four equally
spaced values, such as π/4, 3π/4, 5π/4, and 7π/4

  $s_i(t) = \begin{cases} \sqrt{\dfrac{2E}{T}}\cos\!\left[2\pi f_c t + (2i-1)\dfrac{\pi}{4}\right], & 0 \le t \le T\\[4pt] 0, & \text{elsewhere} \end{cases}$   (7.13)

  • Each one of the four equally spaced phase values corresponds to a unique
pair of bits called a dibit, with symbol duration $T = 2T_b$   (7.14)

  $s_i(t) = \sqrt{\frac{2E}{T}}\cos\!\left[(2i-1)\frac{\pi}{4}\right]\cos(2\pi f_c t) - \sqrt{\frac{2E}{T}}\sin\!\left[(2i-1)\frac{\pi}{4}\right]\sin(2\pi f_c t)$   (7.15)
1. In reality, the QPSK signal consists of the sum of two BPSK signals
2. One BPSK signal is represented by the first term,
$\sqrt{2E/T}\cos[(2i-1)\pi/4]\cos(2\pi f_c t)$, which is the product of
a binary modulating wave and the sinusoidal carrier;
this binary wave has an amplitude equal to ±√(E/2):

  $\sqrt{E}\cos\!\left[(2i-1)\frac{\pi}{4}\right] = \begin{cases} +\sqrt{E/2}, & \text{for } i = 1, 4\\ -\sqrt{E/2}, & \text{for } i = 2, 3 \end{cases}$   (7.16)

  $\sqrt{E}\sin\!\left[(2i-1)\frac{\pi}{4}\right] = \begin{cases} +\sqrt{E/2}, & \text{for } i = 1, 2\\ -\sqrt{E/2}, & \text{for } i = 3, 4 \end{cases}$   (7.17)
• Generation and Coherent Detection of QPSK
Signals
1. Generation
– The incoming binary data stream is first converted into
polar form by a non-return-to-zero level encoder
– The resulting binary wave is next divided, by means of
a demultiplexer, into two separate binary waves
consisting of the odd- and even-numbered input bits
of b(t) – these are referred to as the demultiplexed
components of the input binary wave.
– The two BPSK signals are subtracted to produce the
desired QPSK signal
2. Detection
– The QPSK receiver consists of an in-phase (I) channel and a
quadrature (Q) channel with a common input.
– Each channel is made up of a product modulator, low-pass filter,
sampler, and decision-making device.
– The I- and Q-channels of the receiver recover the demultiplexed
components a1(t) and a2(t)
– By applying the outputs of these two channels to a multiplexer,
the receiver recovers the original binary sequence
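The I/Q receiver above can be sketched with two correlators and a quadrant decision. The Gray mapping of dibits to phase indices and the sampling rate are illustrative assumptions:

```python
import numpy as np

fc, T, fs = 8.0, 2.0, 100.0            # symbol duration T = 2*Tb
n = int(T * fs)
t = np.arange(n) / fs

def qpsk_mod(dibits, E=1.0):
    """Eq. (7.13) with phases (2i+1)*pi/4 for i = 0..3 (assumed Gray map)."""
    gray = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}
    out = []
    for d in dibits:
        phase = (2 * gray[d] + 1) * np.pi / 4
        out.append(np.sqrt(2 * E / T) * np.cos(2 * np.pi * fc * t + phase))
    return np.concatenate(out)

def qpsk_detect(s):
    """Correlate against the I and Q basis functions, decide per quadrant."""
    gray_inv = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
    phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * fc * t)
    phi2 = np.sqrt(2 / T) * np.sin(2 * np.pi * fc * t)
    out = []
    for k in range(len(s) // n):
        seg = s[k * n:(k + 1) * n]
        sI = np.sum(seg * phi1) / fs           # coefficient along phi1
        sQ = -np.sum(seg * phi2) / fs          # minus sign: s = sI*phi1 - sQ*phi2
        phase = np.arctan2(sQ, sI) % (2 * np.pi)
        out.append(gray_inv[int(phase // (np.pi / 2))])
    return out

print(qpsk_detect(qpsk_mod([(0, 0), (1, 1), (1, 0)])))
```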
• Offset Quadriphase-Shift Keying (OQPSK)
– The extent of amplitude fluctuations exhibited by
QPSK signals may be reduced by using a variant of
quadriphase-shift keying
– The demultiplexed binary wave labeled a2(t) is
delayed by one bit duration with respect to the other
demultiplexed binary wave, labeled a1(t)
– ±90° phase transitions occur twice as frequently, but
with a reduced range of amplitude fluctuations.
– Amplitude fluctuations in OQPSK due to filtering have
a smaller amplitude than in QPSK.
• Computer Experiment III: QPSK and OQPSK Spectra
– QPSK spectra
  Carrier frequency: fc = 8 Hz
  Bit duration: Tb = 1 s for part (a) of the figure, 0.5 s for part (b) of the figure
– OQPSK spectra
  • For the same parameters used for QPSK
Frequency-Shift Keying
• Binary Frequency-Shift Keying (BFSK)
– The two symbols are distinguished from each other by transmitting one of two
sinusoidal waves that differ in frequency by a fixed amount

  $s_i(t) = \begin{cases} \sqrt{\dfrac{2E_b}{T_b}}\cos(2\pi f_1 t), & \text{for symbol 1 corresponding to } i = 1\\[4pt] \sqrt{\dfrac{2E_b}{T_b}}\cos(2\pi f_2 t), & \text{for symbol 0 corresponding to } i = 2 \end{cases}$   (7.18)

– Sunde’s BFSK
  • Obtained when the frequencies f1 and f2 are chosen in such a way that they differ from
each other by an amount equal to the reciprocal of the bit duration Tb
• Computer Experiment IV: Sunde’s BFSK
1. Waveform
– Input binary sequence 0011011001 for a bit duration Tb = 1 s
– The latter part of the figure clearly displays the phase-
continuous property of Sunde’s BFSK
2. Spectrum
– Parameters: bit duration Tb = 1 s, carrier frequency fc = 8 Hz
1. The spectrum contains two line components at the frequencies f = fc ± 1/(2Tb), which equal
7.5 Hz and 8.5 Hz for fc = 8 Hz and Tb = 1 s
2. The main lobe occupies a band of width equal to 3/Tb = 3 Hz, centered on the carrier
frequency fc = 8 Hz
3. The largest sidelobe is about 21 dB below the main lobe.
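The phase-continuity property can be reproduced by integrating the instantaneous frequency rather than restarting each sinusoid at a bit boundary. This is a sketch; the bit pattern and sampling rate fs are assumptions, while fc = 8 Hz and Tb = 1 s follow the experiment:

```python
import numpy as np

fc, Tb, fs = 8.0, 1.0, 100.0
f1, f2 = fc + 1 / (2 * Tb), fc - 1 / (2 * Tb)   # Sunde's choice: f1 - f2 = 1/Tb

def sunde_bfsk(bits, Eb=1.0):
    """Generate BFSK by running the phase continuously through bit boundaries."""
    phase, out, dt = 0.0, [], 1.0 / fs
    for b in bits:
        f = f1 if b else f2
        for _ in range(int(Tb * fs)):
            out.append(np.sqrt(2 * Eb / Tb) * np.cos(phase))
            phase += 2 * np.pi * f * dt          # integrate frequency -> continuous phase
    return np.array(out)

s = sunde_bfsk([0, 1, 1])
# no sample-to-sample jump exceeds what the highest frequency allows
print(np.max(np.abs(np.diff(s))))
```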
• Continuous-Phase Frequency-Shift Keying
– The modulated wave maintains phase continuity at all
transition points, even though at those points in time
the incoming binary data stream switches back and
forth between symbols
– In Sunde’s BFSK, the overall excursion δf in the
transmitted frequency from symbol 0 to symbol 1 is
equal to the bit rate of the incoming data stream.
– MSK (Minimum Shift Keying)
  • A special form of CPFSK
  • Uses a different value for the frequency excursion δf, with the result that this
new modulated wave offers superior spectral properties to Sunde’s BFSK.
• Minimum-Shift Keying
– The overall frequency excursion δf from binary symbol 1 to symbol 0 is one half
the bit rate:

  $\delta f = f_1 - f_2 = \frac{1}{2T_b}$   (7.19)

  $f_c = \frac{1}{2}(f_1 + f_2)$   (7.20)

  $f_1 = f_c + \frac{\delta f}{2}, \quad \text{for symbol 1}$   (7.21)

  $f_2 = f_c - \frac{\delta f}{2}, \quad \text{for symbol 0}$   (7.22)

  $s(t) = \sqrt{\frac{2E_b}{T_b}}\cos[2\pi f_c t + \theta(t)]$   (7.23)
– Sunde’s BFSK has no memory; in other words, knowing which particular
change occurred in the previous bit interval provides no help in the current
bit interval. In MSK, by contrast, the phase θ(t) carries memory:

  $\theta(t) = 2\pi\frac{\delta f}{2}t = \frac{\pi t}{2T_b}, \quad \text{for symbol 1}$   (7.24)

  $\theta(t) = -2\pi\frac{\delta f}{2}t = -\frac{\pi t}{2T_b}, \quad \text{for symbol 0}$   (7.25)
• Formulation of Minimum-Shift Keying

  $s(t) = \sqrt{\frac{2E_b}{T_b}}\cos(\theta(t))\cos(2\pi f_c t) - \sqrt{\frac{2E_b}{T_b}}\sin(\theta(t))\sin(2\pi f_c t)$   (7.26)

With the basis function $\sqrt{2/T_b}\cos(2\pi f_c t)$ (7.27) and its quadrature counterpart, the in-phase and quadrature components may be written as

  $s_I(t) = a_1(t)\cos(2\pi f_0 t)$   (7.29)

  $s_Q(t) = a_2(t)\sin(2\pi f_0 t)$   (7.30)

  $\theta(t) = \tan^{-1}\!\left(\frac{s_Q(t)}{s_I(t)}\right) = \tan^{-1}\!\left(\frac{a_2(t)}{a_1(t)}\tan(2\pi f_0 t)\right)$   (7.31)
1. a2(t) = a1(t)
   This scenario arises when two successive binary symbols in the incoming
   data stream are the same:

   $\theta(t) = \tan^{-1}[\tan(2\pi f_0 t)] = 2\pi f_0 t$   (7.32)

2. a2(t) = −a1(t)
   This second scenario arises when two successive binary symbols in the
   incoming data stream are different:

   $\theta(t) = \tan^{-1}[-\tan(2\pi f_0 t)] = -2\pi f_0 t$   (7.33)

   To reproduce Eqs. (7.24) and (7.25), the frequency f0 must be

   $f_0 = \frac{1}{4T_b}$   (7.34)
– Given a non-return-to-zero level encoded binary
wave b(t) of prescribed bit duration Tb and a
sinusoidal carrier wave of frequency fc, we may
formulate the MSK signal by proceeding as follows
1. Use the given binary wave b(t) to construct the binary demultiplexed-
offset waves a1(t) and a2(t)
2. Use Eq. (7.34) to determine the frequency f0
3. Use Eq. (7.29) and (7.30) to determine the in-phase component sI(t) and
quadrature component sQ(t), respectively from which the MSK signal s(t)
follows
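The three construction steps can be sketched as follows. The demultiplexing index arithmetic (including how the edge intervals are clipped) is a simplification assumed for illustration; fc, Tb and fs match the experiment parameters:

```python
import numpy as np

Tb, fc, fs = 1.0, 8.0, 100.0
f0 = 1 / (4 * Tb)                                    # Eq. (7.34)

def msk(bits, Eb=1.0):
    levels = np.where(np.array(bits) == 1, np.sqrt(Eb), -np.sqrt(Eb))
    n = int(Tb * fs)                                 # samples per bit
    N = len(bits) * n
    t = np.arange(N) / fs
    # Step 1: demultiplexed-offset waves. a1 holds even-indexed bits over
    # 2Tb intervals; a2 holds odd-indexed bits, offset by Tb. Edge intervals
    # are clipped to the nearest available bit (a simplification).
    idx1 = (np.arange(N) // (2 * n)) * 2
    idx2 = ((np.arange(N) + n) // (2 * n)) * 2 - 1
    a1 = levels[np.clip(idx1, 0, len(levels) - 1)]
    a2 = levels[np.clip(idx2, 0, len(levels) - 1)]
    # Steps 2-3: form sI, sQ with f0 = 1/(4Tb), then mix up to the carrier
    sI = a1 * np.cos(2 * np.pi * f0 * t)             # Eq. (7.29)
    sQ = a2 * np.sin(2 * np.pi * f0 * t)             # Eq. (7.30)
    return np.sqrt(2 / Tb) * (sI * np.cos(2 * np.pi * fc * t)
                              - sQ * np.sin(2 * np.pi * fc * t))

s = msk([1, 0, 1, 1])
print(len(s), float(round(np.max(np.abs(s)), 3)))
```

Because |a1| = |a2| = √Eb everywhere, sI² + sQ² is constant, so the generated wave has the constant envelope √(2Eb/Tb) expected of an FSK signal.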
• Computer Experiment V: MSK Spectrum
– The parameters: bit duration Tb = 1 s, carrier frequency fc = 8 Hz
– Although the carrier frequency is not high enough
to completely eliminate spectral overlap, the
overlap is relatively small as evidenced by
• The small value of the spectrum at zero frequency
• The small degree of asymmetry about the carrier frequency fc=8Hz
Summary of Three Binary Signaling Schemes
Non-coherent Digital Modulation Schemes
– Both BASK and BPSK are examples of linear modulation, with increasing
complexity in going from BASK to BPSK.
– BFSK is in general an example of nonlinear modulation
• Differential Phase-Shift Keying
– In the case of phase-shift keying, we cannot have noncoherent
detection in the traditional sense, because the term “noncoherent”
means having to do without carrier-phase information
– We employ a “pseudo PSK” technique (differential phase-shift keying)
– DPSK eliminates the need for a coherent reference signal at the
receiver by combining two basic operations at the transmitter:
  • Differential encoding of the input binary wave
  • Phase-shift keying
– The receiver is equipped with a storage capability designed to
measure the relative phase difference between the waveforms
received during two successive bit intervals.
– The phase difference between waveforms received in two successive bit
intervals will be essentially independent of the unknown carrier phase θ.
1. Generation
– The differential encoding process at the transmitter input starts
with an arbitrary first bit, serving merely as reference
• If the incoming binary symbol bk is 1, then the symbol dk is unchanged with respect to the
previous symbol dk-1
• If the incoming binary symbol bk is 0, then the symbol dk is changed with respect to the
previous symbol dk-1
2. Detection
– The phase-modulated pulses pertaining to two successive bits
are identical except for a possible sign reversal
– The incoming pulse is multiplied by the preceding pulse
– The preceding pulse serves the purpose of a locally generated
reference signal
– Applying the sampled output of the low-pass filter to a decision-
making device supplied with a prescribed threshold, detection
of the DPSK signal is accomplished.
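The encoding rule and the compare-with-previous-bit detection can be sketched at the bit level (the reference bit d0 = 1 is the arbitrary starting choice mentioned above):

```python
def dpsk_encode(bits):
    """Differential encoding: keep d unchanged for input 1, flip it for input 0,
    starting from an arbitrary reference bit d0 = 1."""
    d = [1]
    for b in bits:
        d.append(d[-1] if b == 1 else 1 - d[-1])
    return d

def dpsk_decode(d):
    """Compare each symbol with the preceding one (the stored reference)."""
    return [1 if d[k] == d[k - 1] else 0 for k in range(1, len(d))]

bits = [1, 0, 0, 1, 1, 0]
assert dpsk_decode(dpsk_encode(bits)) == bits
# a constant phase flip of the whole received sequence changes nothing,
# which is why no coherent carrier reference is needed:
flipped = [1 - x for x in dpsk_encode(bits)]
print(dpsk_decode(flipped) == bits)  # True
```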
M-ary Digital Modulation Schemes
– In M-ary PSK, the discrete coefficients are respectively referred
to as the in-phase and quadrature components of
the M-ary PSK signal

  $s_i(t) = \sqrt{\frac{2E}{T}}\cos\!\left(2\pi f_c t + \frac{2\pi i}{M}\right), \quad i = 0, 1, \ldots, M-1, \quad 0 \le t \le T$   (7.35)

  $s_i(t) = \sqrt{E}\cos\!\left(\frac{2\pi i}{M}\right)\sqrt{\frac{2}{T}}\cos(2\pi f_c t) - \sqrt{E}\sin\!\left(\frac{2\pi i}{M}\right)\sqrt{\frac{2}{T}}\sin(2\pi f_c t)$   (7.36)

  $\left[\left(\sqrt{E}\cos\frac{2\pi i}{M}\right)^2 + \left(\sqrt{E}\sin\frac{2\pi i}{M}\right)^2\right]^{1/2} = \sqrt{E}, \quad \text{for all } i$   (7.37)
• Signal-Space Diagram
– Pair of orthogonal basis functions:

  $\phi_1(t) = \sqrt{\frac{2}{T}}\cos(2\pi f_c t), \quad 0 \le t \le T$   (7.38)

  $\phi_2(t) = \sqrt{\frac{2}{T}}\sin(2\pi f_c t), \quad 0 \le t \le T$   (7.39)
• M-ary Quadrature Amplitude Modulation (QAM)
– The mathematical description of the new modulated
signal:

  $s_i(t) = \sqrt{\frac{2E_0}{T}}\,a_i\cos(2\pi f_c t) - \sqrt{\frac{2E_0}{T}}\,b_i\sin(2\pi f_c t), \quad i = 0, 1, \ldots, M-1, \quad 0 \le t \le T$   (7.40)

– Unlike M-ary PSK, the quantity $(E_0 a_i^2 + E_0 b_i^2)^{1/2}$ is not the same for all i:
the signal energy varies from point to point.
• Signal-Space Diagram
– the signal-space representation of M-ary QAM for
M=16
– Unlike M-ary PSK, the different signal points of M-
ary QAM are characterized by different energy
levels
– Each signal point in the constellation corresponds
to a specific quadbit
• M-ary Frequency-Shift Keying
– In one form of M-ary FSK, the transmitted signals are defined for some
fixed integer n as

  $s_i(t) = \sqrt{\frac{2E}{T}}\cos\!\left[\frac{\pi}{T}(n+i)t\right], \quad i = 0, 1, \ldots, M-1, \quad 0 \le t \le T$   (7.41)

– Like M-ary PSK, the envelope of M-ary FSK is constant for all M
– The signals are mutually orthogonal:

  $\int_0^T s_i(t)s_j(t)\,dt = \begin{cases} E, & i = j\\ 0, & i \ne j \end{cases}$   (7.42)

– Signal-Space Diagram
  • Unlike M-ary PSK and M-ary QAM, M-ary FSK is described by an M-
dimensional signal-space diagram:

  $\phi_i(t) = \frac{1}{\sqrt{E}}\,s_i(t), \quad i = 0, 1, \ldots, M-1, \quad 0 \le t \le T$   (7.43)
1. Correlating the signal

  $s_1(t) = \sqrt{\frac{2E_b}{T_b}}\cos(2\pi f_1 t), \quad \text{for symbol 1}$   (7.45)

  $s_{11} = \int_0^{T_b} s_1(t)\phi_1(t)\,dt = \int_0^{T_b} \sqrt{E_b}\,\frac{2}{T_b}\cos^2(2\pi f_1 t)\,dt$   (7.46)

  $s_{11} = \sqrt{E_b}$   (7.47)
– As with BPSK, the signal-space diagram consists of
two transmitted signal points:

  $\mathbf{s}_1 = \begin{bmatrix}\sqrt{E_b}\\ 0\end{bmatrix}$   (7.50)    $\mathbf{s}_2 = \begin{bmatrix}0\\ \sqrt{E_b}\end{bmatrix}$   (7.51)

– Figs. 7.23 and 7.24 differ in one important respect:
dimensionality

  $\phi_1(t) = \sqrt{\frac{2}{T_b}}\cos(2\pi f_1 t)$   (7.52)    $\phi_2(t) = \sqrt{\frac{2}{T_b}}\cos(2\pi f_2 t)$   (7.53)
Digital Baseband Transmission
• Why to apply digital transmission?
• Symbols and bits
• Baseband transmission
– Binary error probabilities in baseband transmission
• Pulse shaping
– minimizing ISI and making bandwidth adaptation - cos roll-off signaling
– maximizing SNR at the instant of sampling - matched
filtering
– optimal terminal filters
• Determination of transmission bandwidth as a function
of pulse shape
– Spectral density of Pulse Amplitude Modulation (PAM)
• Equalization - removing residual ISI - eye diagram
Why to Apply Digital Transmission?
• Digital communication withstands channel noise, interference
and distortion better than analog systems. For instance, in PSTN
inter-exchange STP*-links, NEXT (Near-End Cross-Talk) produces
severe interference. For analog systems interference must be
below 50 dB, whereas in digital systems 20 dB is enough. In
this respect digital systems can utilize lower-quality cabling than
analog systems
• Regenerative repeaters are efficient. Note that cleaning of
analog signals by repeaters does not work as well
• Digital HW/SW implementation is straightforward
• Circuits can be easily reconfigured and preprogrammed by DSP
techniques (an application: software radio)
• Digital signals can be coded to yield very low error rates
• Digital communication enables efficient exchange of SNR for
bandwidth -> easy adaptation to different channels
• The cost of digital HW continues to halve every two or three
years
Binary Error Probabilities in Baseband Transmission
Example bit sequence 1 1 0 1 1 0 0 0 1 1 1 1 transmitted by unipolar PAM with
pulse spacing D. The received waveform (with delay $t_d$ and noise n(t)) is

  $y(t) = \sum_k a_k\,p(t - t_d - kD) + n(t)$

Sampling at the decision instants $t_K = t_d + KD$:

  $y(t_K) = a_K + \sum_{k \ne K} a_k\,p(KD - kD) + n(t_K)$

Neglecting ISI, the two hypotheses for unipolar signaling are

  $H_0:\; a_k = 0,\; Y = n, \qquad p_Y(y\,|\,H_0) = p_N(y)$

  $H_1:\; a_k = 1,\; Y = A + n, \qquad p_Y(y\,|\,H_1) = p_N(y - A)$

With decision threshold V, the probability of deciding 1 when 0 was sent is

  $p_{e0} = P(Y > V\,|\,H_0) = \int_V^{\infty} p_Y(y\,|\,H_0)\,dy$

For zero-mean Gaussian noise of variance $\sigma^2$:

  $p_N(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{x^2}{2\sigma^2}\right)$
Determining Error Rate

  $p_{e0} = \int_V^{\infty} p_N(y)\,dy = \int_V^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{x^2}{2\sigma^2}\right)dx = Q\!\left(\frac{V}{\sigma}\right)$

and similarly

  $p_{e1} = \int_{-\infty}^{V} p_N(y - A)\,dy = Q\!\left(\frac{A - V}{\sigma}\right)$
Baseband Binary Error Rate in Terms of Pulse Shape and γb
Setting V = A/2 yields

  $p_e = \tfrac{1}{2}(p_{e0} + p_{e1}), \quad p_{e0} = p_{e1} \;\Rightarrow\; p_e = Q\!\left(\frac{A}{2\sigma}\right)$

With $\gamma_b = \dfrac{E_b}{N_0} = \dfrac{S_R}{N_0 r_b}$ and noting that $N_R = N_0 B_N \ge N_0 r_b/2$ (lower limit with sinc pulses, see later),

  $\left(\frac{A}{2\sigma}\right)^2 = \begin{cases} 2\gamma_b, & \text{polar}\\ \gamma_b, & \text{unipolar} \end{cases}$
Pulse Shaping and Band-Limited Transmission
Cosine roll-off (β = r/2) signaling uses

  $P(f) = \frac{1}{r}\cos^2\!\left(\frac{\pi f}{2r}\right)\Pi\!\left(\frac{f}{2r}\right) \;\Leftrightarrow\; p(t) = \mathrm{sinc}(rt)\,\frac{\cos(\pi r t)}{1 - (2rt)^2}$

Example
By using β = r/2 and polar signaling, the following error rate is obtained:

  $p_e = Q\!\left(\sqrt{2\gamma_b}\right), \quad \text{polar [1]}$
Matched Filtering
The receive filter H(f) is driven by the received pulse plus noise of PSD $G_n(f)$:

  $x_R(t) = A_R\,p(t - t_0) + n(t), \qquad X_R(f) = A_R P(f)\exp(-j2\pi f t_0)$

The received pulse energy is

  $E_R = \int_{-\infty}^{\infty} |X_R(f)|^2\,df = A_R^2\int_{-\infty}^{\infty} |P(f)|^2\,df$

The filter output y(t) is sampled at $t = t_0 + t_d$; the peak signal amplitude and
noise variance at the sampler are

  $A = A_R\int_{-\infty}^{\infty} H(f)P(f)\exp(j2\pi f t_d)\,df, \qquad \sigma^2 = \int_{-\infty}^{\infty} G_n(f)|H(f)|^2\,df$

so the quantity that should be maximized is the sampling-instant SNR

  $\left(\frac{A}{\sigma}\right)^2 = \frac{A_R^2\left|\int H(f)P(f)\exp(j2\pi f t_d)\,df\right|^2}{\int G_n(f)|H(f)|^2\,df}$
Matched Filtering SNR and Transfer Function
Schwartz’s inequality

  $\left|\int V(f)W^*(f)\,df\right|^2 \le \int |V(f)|^2\,df \int |W(f)|^2\,df$

holds with equality when $V(f) = KW^*(f)$. Applying it to the SNR at the moment of
sampling with $V(f) = H(f)$ and $W^*(f) = P(f)\exp(j2\pi f t_d)$ gives the optimum filter

  $H(f) = KP^*(f)\exp(-j2\pi f t_d)$

whose impulse response is $h(t) = Kp(t_d - t)$, a time-reversed, delayed copy of the pulse.
For white noise, $G_n(f) = \eta/2$, and the maximum SNR is set by the pulse energy:

  $\left(\frac{A}{\sigma}\right)^2_{\max} = \frac{2A_R^2}{\eta}\int_{-\infty}^{\infty} |P(f)|^2\,df = \frac{2E_R}{\eta}$
Optimal Terminal Filters
The transmit filter T(f), channel C(f) and receive filter R(f) must together produce a
Nyquist-shaped pulse in y(t):

  $T(f)C(f)R(f) = P(f)\exp(-j2\pi f t_d)$

and for matched filtering at the receiver the following condition must be fulfilled:

  $T(f) = R^*(f)$

From the spectra we note that, for baseband transmission, the transmission bandwidth
with this pulse shaping is $B_T = r$.
PAM Power Spectral Density (PSD)
• PSD for PAM can be determined by using the general expression (with amplitude
autocorrelation $R_a(n)$):

  $G_x(f) = \frac{|P(f)|^2}{D}\sum_{n=-\infty}^{\infty} R_a(n)\exp(-j2\pi n f D)$

For uncorrelated amplitudes, $R_a(n) = \sigma_a^2\delta_n + m_a^2$ (the $m_a^2$ term carries the DC power), and therefore

  $\sum_n R_a(n)\exp(-j2\pi n f D) = \sigma_a^2 + m_a^2\sum_n \exp(-j2\pi n f D)$

On the other hand, $\sum_n \exp(-j2\pi n f D) = \frac{1}{D}\sum_n \delta\!\left(f - \frac{n}{D}\right)$ and $r = 1/D$, so

  $G_x(f) = \sigma_a^2\,r\,|P(f)|^2 + (m_a r)^2\sum_{n=-\infty}^{\infty} |P(nr)|^2\,\delta(f - nr)$
Example
• For a unipolar binary RZ signal:

  $P(f) = \frac{1}{2r_b}\,\mathrm{sinc}\!\left(\frac{f}{2r_b}\right)$

• Assume the source bits are equally likely and independent; then

  $\sigma_a^2 = m_a^2 = \frac{A^2}{4}$

and therefore

  $G_x(f) = \frac{A^2}{16 r_b}\,\mathrm{sinc}^2\!\left(\frac{f}{2r_b}\right) + \frac{A^2}{16}\sum_{n=-\infty}^{\infty} \mathrm{sinc}^2\!\left(\frac{n}{2}\right)\delta(f - n r_b)$
Equalization: Removing Residual ISI
• Consider a tapped-delay-line equalizer with 2N+1 taps:

  $p_{eq}(t) = \sum_{n=-N}^{N} c_n\,p(t - nD - ND)$

[Figure: tapped delay line with gains $c_{-N}, \ldots, c_0, \ldots, c_N$ and a summing output $p_{eq}(t)$]

• Search for the tap gains $c_n$ such that the output equals zero at sample
intervals D except at the decision instant, where it should be unity. The
output samples are

  $p_{eq}(kD + ND) = \sum_{n=-N}^{N} c_n\,p(kD - nD) = \sum_{n=-N}^{N} c_n\,p_{k-n}$

• At the instant of decision:

  $p_{eq}(kD + ND) = \begin{cases} 1, & k = 0\\ 0, & k = \pm 1, \pm 2, \ldots, \pm N \end{cases}$

• That leads to a (2N+1)×(2N+1) matrix equation: the coefficient matrix is the
Toeplitz matrix of pulse samples $[p_{k-n}]$, the unknown vector is
$[c_{-N}, \ldots, c_0, \ldots, c_N]^T$, and the right-hand side is all zeros except for a 1
in the middle position. From it the 2N+1 tap coefficients can be solved.
Example of Equalization
• Read the distorted pulse values from fig. (a):
$p_{-2} = 0.0,\; p_{-1} = 0.1,\; p_0 = 1.0,\; p_1 = -0.2,\; p_2 = 0.1$.
The three-tap (N = 1) equations are

  $\begin{bmatrix} 1.0 & 0.1 & 0.0\\ -0.2 & 1.0 & 0.1\\ 0.1 & -0.2 & 1.0 \end{bmatrix} \begin{bmatrix} c_{-1}\\ c_0\\ c_1 \end{bmatrix} = \begin{bmatrix} 0\\ 1\\ 0 \end{bmatrix}$

  and the solution is $c_{-1} = -0.096,\; c_0 = 0.96,\; c_1 = 0.2$.
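The three-tap zero-forcing solve can be checked numerically; the matrix rows are the Toeplitz arrangement of the distorted pulse samples from the example:

```python
import numpy as np

# distorted pulse samples: p(-2)=0.0, p(-1)=0.1, p(0)=1.0, p(1)=-0.2, p(2)=0.1
P = np.array([[1.0,  0.1, 0.0],     # row k=-1: [p(0),  p(-1), p(-2)]
              [-0.2, 1.0, 0.1],     # row k=0:  [p(1),  p(0),  p(-1)]
              [0.1, -0.2, 1.0]])    # row k=+1: [p(2),  p(1),  p(0)]
target = np.array([0.0, 1.0, 0.0])  # zero ISI at k = +-1, unity at k = 0
c = np.linalg.solve(P, target)      # tap gains [c_-1, c_0, c_1]
print(np.round(c, 3))
```

The solver reproduces the slide's tap gains (to rounding): c ≈ (−0.096, 0.961, 0.202).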
• Eye Pattern
– Produced by the synchronized superposition of
successive symbol intervals of the distorted waveform
appearing at the output of the receive-filter prior to
thresholding
– From an experimental perspective, the eye pattern
offers two compelling virtues:
  • The simplicity of generation
  • The provision of a great deal of insightful information about
the characteristics of the data transmission system; hence its
wide use as a visual indicator of how well or poorly a data
transmission system performs the task of transporting a data
sequence across a physical channel.
• Timing Features
– Three timing features pertaining to a binary data
transmission system:
  • Optimum sampling time: the width of the eye opening
defines the time interval over which the distorted binary waveform
appearing at the output of the receive-filter can be uniformly sampled
  • Zero-crossing jitter: in the receive-filter output, there will
always be irregularities in the zero-crossings, which give rise
to jitter and therefore non-optimum sampling times
  • Timing sensitivity: this sensitivity is determined by the rate
at which the eye pattern is closed as the sampling time is
varied.
Fig. 6.5
• The Peak Distortion for Intersymbol
Interference
– In the absence of channel noise, the eye opening
assumes two extreme values
• An eye opening of unity, which corresponds to zero
intersymbol interference
• An eye opening of zero, which corresponds to a
completely closed eye pattern; this second extreme
case occurs when the effect of intersymbol interference
is severe enough for some upper traces in the eye
pattern to cross their lower traces.
Fig. 6.6
– Noise margin
  • In a noisy environment, the extent of the eye opening at the optimum sampling
time provides a measure of the operating margin over additive channel noise

  $(\text{Eye opening}) = 1 - D_{\text{peak}}$   (6.34)   (Fig. 6.7)

– Eye opening
  • Plays an important role in assessing system performance
  • Specifies the smallest possible noise margin
  • Zero peak distortion occurs when the eye opening is unity
  • Unity peak distortion occurs when the eye pattern is completely closed
  • The idealized signal component of the receive-filter output is defined by the first
term in Eq. (6.10); the intersymbol interference is defined by the second term:

  $y_i = E a_i + \sum_{k,\,k \ne i} a_k\,p_{i-k}, \quad i = 0, \pm 1, \pm 2, \ldots$   (6.10)
The peak distortion equals the maximum intersymbol interference:

  $D_{\text{peak}} = (\text{Maximum ISI}) = \sum_{k,\,k \ne i} |p_{i-k}|, \qquad p_{i-k} = p((i-k)T_b)$   (6.35)
6.7 Computer Experiment: Eye Diagrams for Binary and
Quaternary Systems
• Figs. 6.8(a) and 6.8(b) show the eye diagrams for a baseband PAM
transmission system using M = 2 and M = 4. (Fig. 6.8)
• Figs. 6.9(a) and 6.9(b) show the eye diagrams for these two
baseband-pulse transmission systems using the same system
parameters as before, but this time under a bandwidth-limited
condition, with channel response

  $H(f) = \frac{1}{1 + (f/f_0)^{2N}}$

and transmission bandwidth $B_T = 0.5(1 + 0.5) = 0.75$ Hz. (Fig. 6.9)
Introduction
• What is information theory?
– Information theory is needed to enable the
communication system to carry information (signals) from
sender to receiver over a communication channel
  • it deals with mathematical modelling and analysis of a
communication system
  • its major task is to answer the questions of signal compression
and transfer rate
– Those answers can be found by means of entropy and
channel capacity
Entropy
• Entropy is defined in terms of the probabilistic
behaviour of a source of information
• In information theory, the source outputs are
discrete random variables that have a certain
fixed finite alphabet with certain probabilities
– Entropy is the average information content per
source symbol:

  $H = \sum_{k=0}^{K-1} p_k \log_2\!\left(\frac{1}{p_k}\right)$
Entropy (example)
• Entropy of a Binary
Memoryless Source
• A binary memoryless
source has symbols
0 and 1, which have
probabilities p0 and
p1 = (1 − p0)
• Compute the entropy
as a function of p0
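The entropy formula and the binary-source example can be computed directly (the probability values chosen below are illustrative):

```python
import math

def entropy(probs):
    """H = sum over k of p_k * log2(1/p_k); terms with p_k = 0 contribute 0."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# binary memoryless source: H(p0) peaks at 1 bit/symbol when p0 = 0.5
for p0 in (0.0, 0.1, 0.5, 0.9):
    print(p0, round(entropy([p0, 1 - p0]), 4))
```

The printed curve is symmetric about p0 = 0.5: a deterministic source (p0 = 0 or 1) carries no information, while a fair source carries one full bit per symbol.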
Source Coding Theorem
• Source coding means an effective
representation of data generated by
a discrete source
– representation by a source encoder
  • statistics of the source must be
known (e.g. if coding priorities exist)
Data Compaction
• Data compaction (a.k.a. lossless data
compression) means that we remove
redundant information from the signal
prior to transmission
– basically this is achieved by assigning short
descriptions to the most frequent outcomes of
the source output and vice versa
• Source-coding schemes that are used in
data compaction include prefix coding,
Huffman coding and Lempel-Ziv coding
Data Compaction example
• Prefix coding has the important feature
that it is always uniquely decodable, and
it also satisfies the Kraft-McMillan
inequality (see formula 10.22, p. 624)
• Prefix codes can also be referred to as
instantaneous codes, meaning that the
decoding process is achieved
immediately
Data Compaction example
• In Huffman coding, each symbol of a given
alphabet is assigned a
sequence of bits according to the symbol
probability
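A sketch of the Huffman merge procedure (the example alphabet and probabilities are illustrative assumptions):

```python
import heapq

def huffman(freqs):
    """Build a Huffman code: repeatedly merge the two least probable nodes,
    prefixing '0' to one branch and '1' to the other."""
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)                       # tie-breaker so dicts never compare
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

probs = {"a": 0.4, "b": 0.2, "c": 0.2, "d": 0.1, "e": 0.1}
code = huffman(probs)
avg_len = sum(probs[s] * len(code[s]) for s in probs)
print(code, avg_len)
```

As expected from the source coding theorem, the average code length (2.2 bits/symbol here) is slightly above the source entropy (about 2.12 bits/symbol), and frequent symbols receive the shortest codewords.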
Data Compaction example
Error-Control Coding
Classification of Error-Control Coding:
• Systematic code / Nonsystematic code
• Linear code / Nonlinear code
• Error-detecting code / Error-correcting code
• Block code
• Convolution code
Code Rate:

  $r = \frac{k}{n}, \qquad r < 1$
Hamming Distance and Hamming Weight
Hamming weight w(c): defined as the number of nonzero elements in the code vector c.
  Example: c = 10110011 → w = 5
Hamming distance d: the number of positions in which two code vectors differ.
  Example: d(10110011, 10011010) = 3
Minimum Distance and Minimum
Weight
• The minimum weight wmin is defined as the smallest weight of all nonzero code
vectors in the code.
• The minimum distance dmin is defined as the smallest Hamming distance between
any pair of distinct code vectors in the code.
• The number of correctable errors t satisfies

  $t = \frac{1}{2}(d_{\min} - 1)$   (10.25)

Figure 10.6
(a) Hamming distance $d(c_i, c_j) \ge 2t + 1$. (b) Hamming distance
$d(c_i, c_j) \le 2t$. The received vector is denoted by r.
10.3 Linear Block Codes
What is a block code?
The message sequence is divided into blocks of k bits, and each block is mapped
to an n-bit code word.
What is a systematic linear block code?
  k message bits: 1 1 0 1
  n code bits: 0 0 1 1 0 1
The message bits (1101) appear unchanged within the code word – a systematic code.
The remaining n−k bits are computed from the message bits in accordance with a given
encoding rule.
Relation between parity bits and message bits
The (n−k) parity bits are linear sums of the k message bits,
as shown by the generalized relation

  $b_i = p_{0i}m_0 + p_{1i}m_1 + \cdots + p_{k-1,i}m_{k-1}$   (10.2)

where

  $p_{ij} = \begin{cases} 1, & \text{if } b_i \text{ depends on } m_j\\ 0, & \text{otherwise} \end{cases}$
Matrix Notation
1-by-k message vector: m = [m_0, m_1, ..., m_(k-1)]
1-by-n code vector: c = [b | m]
The parity relations (10.2) can be written as b = mP, where P is the k-by-(n-k) matrix

P =
p_00      p_01      ...  p_0,n-k-1
p_10      p_11      ...  p_1,n-k-1
...
p_(k-1),0 p_(k-1),1 ...  p_(k-1),n-k-1

and each p_ij is 0 or 1.
Because b = mP,
c = [mP | m] = m [P I_k]
Defining the generator matrix
G = [P I_k]   (10.12)
we have
c = mG   (10.13)
Why is G called the generator matrix?
c = mG
The full set of code words, referred to simply as the code, is generated in accordance with c = mG by letting the message vector m range through the set of all 2^k binary k-tuples (1-by-k vectors).
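A minimal sketch of c = mG over GF(2); the generator matrix used here is the (7, 4) Hamming matrix G = [P I_k] that appears later in these notes:

```python
def encode(m, G):
    """Encode the 1-by-k message vector m as c = mG over GF(2)."""
    n = len(G[0])
    return [sum(m[i] * G[i][j] for i in range(len(m))) % 2 for j in range(n)]

# (7,4) Hamming generator matrix G = [P | I_k]
G = [[1, 1, 0, 1, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 0, 1]]

from itertools import product
# Let m range through all 2^4 = 16 binary 4-tuples to generate the code
code = {tuple(encode(list(m), G)) for m in product([0, 1], repeat=4)}
```

Because the code is linear, the modulo-2 sum of any two code words is again a code word.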
Parity-Check Matrix H
Let H denote an (n-k)-by-n matrix, defined as
H = [I_(n-k) P^T]
Hence
HG^T = [I_(n-k) P^T] [P^T ; I_k] = P^T + P^T = 0  (modulo-2 arithmetic)
That is,
HG^T = 0
We have known that c = mG. Postmultiplying both sides by H^T, we get
cH^T = mGH^T = 0   (10.16)
Generator equation:     c = mG      (10.13)
Parity-check equation:  cH^T = 0    (10.16)
They are basic to the description and operation of a linear block code.
Example: (5, 1) repetition code
k = 1, n = 5, I_k = [1]
c = (c_0, c_1, ..., c_4) = (b_0, b_1, b_2, b_3, m_0)
b_0 = b_1 = b_2 = b_3 = m_0, so P = [1 1 1 1] and G = [1 1 1 1 1]

H = [I_(n-k) P^T] =
1 0 0 0 1
0 1 0 0 1
0 0 1 0 1
0 0 0 1 1

Because m = 0 or 1, c = mG = 00000 or 11111.
Error Pattern
transmitted code word: c (1×n vector)
received word: r (1×n vector)
error pattern: e (1×n vector), so that r = c + e
Our task now is to decode the code vector c from the received vector r. How do we do it?
Definition and Properties of the Syndrome
Definition: s = rH^T   (10.19)
H^T is an n×(n-k) matrix, so s is a 1×(n-k) vector.
Property 1: the syndrome depends only on the error pattern, and not on the transmitted code word, because
s = rH^T = (c + e)H^T = cH^T + eH^T = eH^T
Syndrome Decoding
The decoding procedure for a linear block code:
① Compute the syndrome s = rH^T of the received vector r.
② Identify the error pattern e_max with the largest probability of occurrence among those whose syndrome equals s.
③ Compute the code vector c = r + e_max as the decoded version of the received vector r.
Likelihood decoding:
Candidate error patterns: e ∈ {e_1, e_2, ..., e_i, ..., e_n}
s = rH^T = (c + e)H^T = cH^T + eH^T = eH^T
Compute s_1 = e_1 H^T, s_2 = e_2 H^T, ..., s_i = e_i H^T, ..., s_n = e_n H^T;
the error pattern e_i whose syndrome matches s (equivalently, s^T = H e_i^T) is taken as the most likely one.
Example 10.2 Hamming Codes
A Hamming code is a special (n, k) linear block code that can correct any single-bit error. For an integer m >= 3:
Block length: n = 2^m - 1
Number of message bits: k = 2^m - m - 1
Number of parity bits: n - k = m
(7, 4) Hamming Code
Generator matrix G = [P I_k]:
1 1 0 1 0 0 0
0 1 1 0 1 0 0
1 1 1 0 0 1 0
1 0 1 0 0 0 1
Parity-check matrix H = [I_(n-k) P^T]:
1 0 0 1 0 1 1
0 1 0 1 1 1 0
0 0 1 0 1 1 1
Suppose the received vector is r = [1 1 0 0 0 1 0]. Is it correct?
s = rH^T = [0 0 1] ≠ 0
So the received code vector is wrong.
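Putting the syndrome-decoding steps together for this (7, 4) code, a sketch assuming at most a single-bit error (for which the syndrome equals the matching column of H):

```python
def mat_vec(H, r):
    """s = r H^T over GF(2): H is (n-k)-by-n, r has length n."""
    return tuple(sum(hi * ri for hi, ri in zip(row, r)) % 2 for row in H)

# (7,4) Hamming parity-check matrix H = [I_{n-k} | P^T]
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

def decode(r):
    """Syndrome decoding, assuming at most one bit error."""
    s = mat_vec(H, r)
    if s == (0, 0, 0):
        return list(r)                 # no detectable error
    for j in range(7):                 # find the column of H equal to s
        if tuple(H[i][j] for i in range(3)) == s:
            return [b ^ (k == j) for k, b in enumerate(r)]  # flip bit j
    return list(r)                     # >1 error: not correctable here

r = [1, 1, 0, 0, 0, 1, 0]              # received vector from the slide
c_hat = decode(r)
```

Here the syndrome picks out the erroneous bit position, and the decoded word satisfies the parity-check equation cH^T = 0.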
Cyclic Codes
• A cyclic code is a kind of linear block code.
• Any cyclic shift of a code word in the code is also a code word.
• Cyclic codes possess a well-defined mathematical structure, which has led to the development of very efficient decoding schemes for them.
Cyclic Property
Assume that c = (c_0, c_1, ..., c_(n-1)) is a code word of an (n, k) linear block code.
Code polynomial:
c(X) = c_0 + c_1 X + c_2 X^2 + ... + c_(n-1) X^(n-1)
The i-th cyclic shift has the code polynomial
c^(i)(X) = c_(n-i) + c_(n-i+1) X + ... + c_(n-1) X^(i-1) + c_0 X^i + c_1 X^(i+1) + ... + c_(n-i-1) X^(n-1)   (10.30)
X c(X) = c_0 X + c_1 X^2 + c_2 X^3 + ... + c_(n-1) X^n
X^i c(X) = X^i (c_0 + c_1 X + ... + c_(n-i-1) X^(n-i-1) + c_(n-i) X^(n-i) + ... + c_(n-1) X^(n-1))
         = c_0 X^i + c_1 X^(i+1) + ... + c_(n-i-1) X^(n-1) + c_(n-i) X^n + ... + c_(n-1) X^(n+i-1)
         = c_(n-i) + ... + c_(n-1) X^(i-1) + c_0 X^i + c_1 X^(i+1) + ... + c_(n-i-1) X^(n-1)
           + c_(n-i) (X^n + 1) + ... + c_(n-1) X^(i-1) (X^n + 1)
         = c^(i)(X) + (X^n + 1)(c_(n-i) + c_(n-i+1) X + ... + c_(n-1) X^(i-1))
where c^(i)(X) is the shifted polynomial of (10.30).
Relation between X^i c(X) and c^(i)(X)
X^i c(X) / (X^n + 1) = q(X) + c^(i)(X) / (X^n + 1)
with quotient q(X) = c_(n-i) + c_(n-i+1) X + ... + c_(n-1) X^(i-1) and remainder c^(i)(X). Hence
X^i c(X) = q(X)(X^n + 1) + c^(i)(X)
c^(i)(X) = X^i c(X) mod (X^n + 1)   (10.33)
Generator Polynomial
Let g(X) be a polynomial of degree n-k that is a factor of X^n + 1. It may be expanded as
g(X) = 1 + Σ_(i=1..n-k-1) g_i X^i + X^(n-k),   g_i = 0 or 1
Every code polynomial is a multiple of the generator polynomial:
c(X) = a(X) g(X)   (10.35)
Encoding Procedure
Parity polynomial: b(X) = b_0 + b_1 X + ... + b_(n-k-1) X^(n-k-1)
Therefore, the code polynomial is
c(X) = b(X) + X^(n-k) m(X)   (10.38)
How to determine b(X)?
Because c(X) = a(X) g(X)   (10.35)
and c(X) = b(X) + X^(n-k) m(X)   (10.38),
a(X) g(X) = b(X) + X^(n-k) m(X)
X^(n-k) m(X) / g(X) = a(X) + b(X) / g(X)   (10.39)
It states that the polynomial b(X) is the remainder left over after dividing X^(n-k) m(X) by g(X).
Steps for encoding an (n, k) systematic cyclic code:
1. Multiply the message polynomial m(X) by X^(n-k).
2. Divide X^(n-k) m(X) by the generator polynomial g(X) to obtain the remainder b(X).
3. Add b(X) to X^(n-k) m(X) to obtain the code polynomial c(X).
How to select the generator polynomial?
Rule: any factor of X^n + 1 can be used as a generator polynomial; the degree of the factor determines the number of parity bits.
Example: (7, k) cyclic codes
X^7 + 1 = (1 + X)(1 + X + X^3)(1 + X^2 + X^3)
Choosing g(X) = 1 + X gives a (7, 6) cyclic code.
The rows of G(X) are the coefficients of g(X), X g(X), ..., X^(k-1) g(X); row reduction brings G(X) into the systematic form G = [P I_k], from which H follows.
Fig. 10.8 Encoder for an (n, k) cyclic code: a shift register of n-k stages (D) with feedback connections given by the coefficients of
g(X) = 1 + Σ_(i=1..n-k-1) g_i X^i + X^(n-k),   g_i = 1 or 0
and input message m = [m_0, m_1, ..., m_(k-1)].
Calculation of the Syndrome
s = rH^T
• If the syndrome is zero, there are no transmission errors in the received word.
• If the syndrome is nonzero, the received word contains transmission errors that require correction.
In the case of a cyclic code in systematic form, the syndrome can be calculated easily.
Syndrome polynomial
Suppose the code word c(X) is transmitted and r(X) is received. Dividing,
r(X) / g(X) = q(X) + s(X) / g(X)
with quotient q(X) and remainder s(X), so that
r(X) = q(X) g(X) + s(X)   (10.47)
The remainder s(X) is the syndrome polynomial.
Syndrome Calculator for an (n, k) Cyclic Code (figure: shift register of n-k stages)
In this example, we have two primitive polynomials:
1 + X + X^3 and 1 + X^2 + X^3
Here, we take g(X) = 1 + X + X^3, so
h(X) = (X^7 + 1) / g(X) = (1 + X)(1 + X^2 + X^3) = 1 + X + X^2 + X^4
Suppose the message sequence is 1001.
Message polynomial: m(X) = 1 + X^3
X^(n-k) m(X) = X^3 m(X) = X^3 + X^6
Dividing X^3 + X^6 by g(X) = 1 + X + X^3:
quotient a(X) = X + X^3, remainder b(X) = X + X^2
Code polynomial:
c(X) = b(X) + X^(n-k) m(X) = X + X^2 + X^3 + X^6
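The division above can be checked with a small GF(2) polynomial routine; a sketch, with polynomials as coefficient lists, lowest degree first:

```python
def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division (coefficients low degree first)."""
    r = list(dividend)
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:                              # cancel the leading term
            for j, d in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= d
    return r[:len(divisor) - 1]

def cyclic_encode(m, g, n):
    """Systematic (n,k) cyclic encoding: c(X) = b(X) + X^(n-k) m(X),
    where b(X) = X^(n-k) m(X) mod g(X)."""
    nk = len(g) - 1                           # number of parity bits n - k
    assert nk + len(m) == n
    shifted = [0] * nk + list(m)              # X^(n-k) m(X)
    b = poly_mod(shifted, g)                  # parity polynomial b(X)
    return b + list(m)                        # parity bits, then message bits

# (7,4) code with g(X) = 1 + X + X^3, message 1001 (m(X) = 1 + X^3)
c = cyclic_encode([1, 0, 0, 1], [1, 1, 0, 1], 7)
```

For the message 1001 this reproduces the parity bits b(X) = X + X^2 and the code word 0111001 computed above.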
For the (7, 4) code with g(X) = 1 + X + X^3, the rows of G(X) are the coefficients of g(X), X g(X), X^2 g(X), X^3 g(X):
g(X)     = 1 + X + X^3     → 1 1 0 1 0 0 0
X g(X)   = X + X^2 + X^4   → 0 1 1 0 1 0 0
X^2 g(X) = X^2 + X^3 + X^5 → 0 0 1 1 0 1 0
X^3 g(X) = X^3 + X^4 + X^6 → 0 0 0 1 1 0 1
giving the nonsystematic generator matrix G′.
Row reduction (modulo-2 addition of rows) converts G′ into the systematic form G:
G′ =              G =
1 1 0 1 0 0 0     1 1 0 1 0 0 0
0 1 1 0 1 0 0     0 1 1 0 1 0 0
0 0 1 1 0 1 0     1 1 1 0 0 1 0
0 0 0 1 1 0 1     1 0 1 0 0 0 1
If m = (1001), then c = mG = 0111001.
With G = [P I_k]   (10.12) and H = [I_(n-k) P^T]   (10.14):
H =
1 0 0 1 0 1 1
0 1 0 1 1 1 0
0 0 1 0 1 1 1
Suppose g(X) = 1 + X + X^3 is given. Determine the encoder and syndrome calculator.
Comparing with g(X) = 1 + Σ g_i X^i + X^(n-k): g_1 = 1, g_2 = 0.
(Figure: three-stage shift-register encoder.)
Encoding the message 1001 (shift-register contents after each shift):
shift 1: input 1 → 1 1 0
shift 2: input 0 → 0 1 1
shift 3: input 0 → 1 1 1
shift 4: input 1 → 0 1 1  (parity bits)
Code word: 0111001
Suppose the received code is 0110001. Is it correct?
s = rH^T = [0 1 1 0 0 0 1] H^T = [1 1 0] ≠ 0
So the received word contains errors.
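The same syndrome can be obtained by reducing r(X) modulo g(X), per (10.47); a sketch using GF(2) polynomial division (coefficients lowest degree first):

```python
def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division (coefficients low degree first)."""
    r = list(dividend)
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:                              # cancel the leading term
            for j, d in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= d
    return r[:len(divisor) - 1]

# received word 0110001 -> r(X) = X + X^2 + X^6, with g(X) = 1 + X + X^3
s = poly_mod([0, 1, 1, 0, 0, 0, 1], [1, 1, 0, 1])   # syndrome polynomial s(X)
```

The remainder is s(X) = 1 + X, i.e. the syndrome 1 1 0, agreeing with the matrix computation above.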
Syndrome calculator (Fig. 10.11): the received word 0110001 is shifted into the register; after shift 6 the register holds 0 0 1, and after shift 7 it holds the syndrome 1 1 0.
Convolutional codes
Introduction
• Convolutional codes map information to code bits sequentially by
convolving a sequence of information bits with “generator”
sequences
Properties of convolutional codes
Consider a convolutional encoder. Input to the encoder is an information bit sequence u, partitioned into blocks of length K:
u = (u_0, u_1, ...),   u_i = (u_i^(1), u_i^(2), ..., u_i^(K))
The encoder outputs the code bit sequence x, partitioned into blocks of length N:
x = (x_0, x_1, ...),   x_i = (x_i^(1), x_i^(2), ..., x_i^(N))
Code rate: R = K/N
Example: consider a rate ½ convolutional code with K = 1 and N = 2 defined by a circuit with input u_i and outputs x_i^(1), x_i^(2) (figure). The sequences (x_0^(1), x_1^(1), ...) and (x_0^(2), x_1^(2), ...) are generated by convolving the input sequence with the generator sequences.
• The convolutional code is linear
Example: code tree of the rate ½ code defined by the circuit (figure). Each input bit selects a branch; branches are labeled with the output bit pairs 00, 11, 01, 10, and the nodes with the encoder states A and B.
Trellis
The tree graph can be contracted to a directed graph, called the trellis of the convolutional code, having at most S nodes at distance i = 0, 1, ... from the root.
The vector s_i = (s_i^(0), s_i^(1), ..., s_i^(MK-1)), combining all register contents at time step i, is called the state of the encoder at time step i.
The code bit block x_i is clearly a function of s_i and u_i only.
Example (trellis figure): states A and B; branches labeled 00, 11, 01, 10; one branch type for u_i = 0, the other for u_i = 1.
Example: constructing a trellis section for the circuit with
x_i^(1) = u_i and x_i^(2) = u_i ⊕ s_i
Two equations are required.
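A minimal encoder sketch for these two equations, with the single-bit state s_i equal to the previous input bit:

```python
def conv_encode(u):
    """Rate-1/2 convolutional encoder for the code above:
    x_i(1) = u_i,  x_i(2) = u_i XOR s_i, where s_i is the previous input."""
    s = 0                          # encoder state (register content)
    out = []
    for ui in u:
        out.append((ui, ui ^ s))   # code bit block (x(1), x(2))
        s = ui                     # the state becomes the current input
    return out

x = conv_encode([1, 0, 1, 1])      # -> [(1, 1), (0, 1), (1, 1), (1, 0)]
```

Linearity can be seen directly: encoding the XOR of two input sequences gives the XOR of their code sequences.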
Trellis section (figure): branches labeled u | x^(1) x^(2). From state 0: 0|00 (to state 0) and 1|11 (to state 1); from state 1: 0|01 (to state 0) and 1|10 (to state 1). Chaining the section over time steps s_0, s_1, ..., s_(L-2), s_(L-1) gives the full trellis; the final sections are truncated so that the encoder terminates in state 0.
State diagram
Example: trellis section of the rate ½ convolutional code with x_i^(1) = u_i and x_i^(2) = u_i ⊕ s_i (states s_i → s_(i+1); branches 00, 11, 01, 10; one branch type for u_i = 0, the other for u_i = 1).
Collapsing the time axis yields the state diagram: two states 0 and 1 with self-loops and crossing transitions labeled by the output pairs 00, 11, 01, 10.
Description with submatrices
Definition: a convolutional code is the set C of code bit sequences x generated from all input sequences
u = (u_0, u_1, ...),   u_i = (u_i^(1), ..., u_i^(K)),   u_i^(j) ∈ GF(2)
(Figure: example circuits with inputs u_i^(1), u_i^(2) and outputs x_i^(1), x_i^(2), x_i^(3).)
Generator matrix
x = uG, where G is the semi-infinite matrix

G =
G_0 G_1 G_2 ... G_M
    G_0 G_1 G_2 ... G_M
        G_0 G_1 G_2 ... G_M
            ...

so that x_i = Σ_(m=0..M) u_(i-m) G_m.
The generated convolutional code has rate R = K/N, memory K·M, and constraint length K·(M+1).
Example:
The rate ½ code is given by x_i^(1) = u_i and x_i^(2) = u_i ⊕ s_i.
G_0 governs how u_i affects (x_i^(1) x_i^(2)): G_0 = [1 1]; G_1 governs how u_(i-1) affects it: G_1 = [0 1].

G =
11 01
   11 01
      11 ...
Description with polynomials

G(D) =
g_1^(1)(D) g_1^(2)(D) ... g_1^(N)(D)
g_2^(1)(D) g_2^(2)(D) ... g_2^(N)(D)
...
g_K^(1)(D) g_K^(2)(D) ... g_K^(N)(D)

with g_i^(j)(D) = g_i,0^(j) + g_i,1^(j) D + g_i,2^(j) D^2 + ... + g_i,M^(j) D^M,   g_i,m^(j) ∈ GF(2)
x(D) = u(D) G(D)
The polynomial coefficients are related to the submatrices by g_i,m^(j) = G_m(i, j), i = 1, ..., K, j = 1, ..., N, m = 0, ..., M.
Example:
The rate ½ code is given by x_i^(1) = u_i and x_i^(2) = u_i ⊕ s_i.
The polynomial g_1^(1)(D) = 1 governs how u_i affects x_i^(1); g_1^(2)(D) = 1 + D governs how u_(i-m), m = 0, 1, affects x_i^(2).
Example (two-stage encoder, figure): input u_j, states s_1j, s_2j, outputs x_1j, x_2j. With input u = (1,1,1,1,1,1,1) and initial state (s_10, s_20) = (1,1), the output is x = (11,11,11,11,11,11,11).
(Trellis section in antipodal ±1 notation, figure): states (s_1j, s_2j) ∈ {+1+1, +1-1, -1+1, -1-1}; each branch is labeled u / x_1j x_2j, e.g. +1/+1+1 and -1/-1-1 from state +1+1.
Hard decisions
Received sequence: y = (+1+1, -1+1, +1+1, +1+1, +1+1, +1+1, +1+1)
Branch metric: λ_j^(m) = x_1j^(m) y_1j + x_2j^(m) y_2j
(Figure: trellis with the received pairs written above the sections.)
Hard decisions
Received sequence: y = (+1+1, -1+1, +1-1, +1+1, +1+1, +1+1, +1+1)
Branch metric: λ_j^(m) = x_1j^(m) y_1j + x_2j^(m) y_2j
(Figures: Viterbi trellis with accumulated path metrics per state; the survivor path reaches a final metric of +10.)
Hard decisions
Received sequence: y = (+1+1, -1-1, -1+1, +1+1, +1+1, +1+1, +1+1)
Branch metric: λ_j^(m) = x_1j^(m) y_1j + x_2j^(m) y_2j
(Figure: trellis with the received pairs written above the sections.)
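The path-metric computation traced in these trellis examples can be written as a small hard-decision Viterbi decoder. This is a sketch for the 2-state rate-½ code x^(1)=u, x^(2)=u⊕s, assuming antipodal ±1 chips, the correlation branch metric used above, a start in the all-zero state, and best-metric termination:

```python
def viterbi(y):
    """Hard-decision Viterbi decoding of the 2-state rate-1/2 code
    x(1)=u, x(2)=u XOR s, in antipodal notation (bit b -> 1 - 2b).
    y: list of (y1, y2) pairs of +/-1 values. Maximises the accumulated
    correlation metric x1*y1 + x2*y2 over all trellis paths."""
    states = [0, 1]
    metric = {0: 0.0, 1: float("-inf")}     # start in the all-zero state
    paths = {0: [], 1: []}
    for y1, y2 in y:
        new_metric, new_paths = {}, {}
        for ns in states:                   # next state = current input bit
            best = None
            for s in states:
                u = ns
                x1, x2 = 1 - 2 * u, 1 - 2 * (u ^ s)   # antipodal code bits
                m = metric[s] + x1 * y1 + x2 * y2     # add branch metric
                if best is None or m > best[0]:
                    best = (m, paths[s] + [u])        # survivor selection
            new_metric[ns], new_paths[ns] = best
        metric, paths = new_metric, new_paths
    final = max(states, key=lambda s: metric[s])
    return paths[final]

# one chip error in the second pair (all-zero word sent) is corrected:
u_hat = viterbi([(+1, +1), (-1, +1), (+1, +1), (+1, +1), (+1, +1)])
```

With the single chip error of the first example, the survivor path is still the all-zero path, matching the trellis tables.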
Soft decisions
Each chip is weighted by an estimate of the channel quality:
l_ij = 2 for a GOOD channel, l_ij = 1/2 for a BAD channel
Branch metric: λ_j^(m) = l_1j x_1j^(m) y_1j + l_2j x_2j^(m) y_2j
Received sequence: y = (+1+1, -1-1, -1+1, +1+1, +1+1, +1+1, +1+1)
(Figures: soft-decision Viterbi trellis. Channel quality per chip pair: GB BB GG GB BB GG GG; weighted received values: +2+0.5, -0.5-0.5, -2+2, +2+0.5, +0.5+0.5, +2+2, +2+2; the survivor path reaches a final metric of +18.)
Spread Spectrum
Introduction to Spread Spectrum
• A shared medium can be divided along three dimensions: time (t), frequency (f), and code (c)
• Goal: multiple use of a shared medium
• Important: guard spaces needed!
(Figure: three signals s1, s2, s3 separated in the t/f/c space.)
Frequency multiplex
• Separation of the spectrum into smaller frequency bands
• A channel gets a band of the spectrum for the whole time
• Advantages:
– no dynamic coordination needed
– works also for analog signals
• Disadvantages:
– waste of bandwidth if traffic is distributed unevenly
– inflexible
– guard spaces
(Figure: channels k3 ... k6 side by side on the frequency axis.)
Time multiplex
• A channel gets the whole spectrum for a certain amount of time
• Advantages:
– only one carrier in the medium at any time
– throughput high even for many users
• Disadvantages:
– precise synchronization necessary
(Figure: channels k1 ... k6 one after another on the time axis.)
Time and frequency multiplex
• A channel gets a certain frequency band for a certain amount of time (e.g. GSM)
• Advantages:
– better protection against tapping
– protection against frequency-selective interference
– higher data rates compared to code multiplex
• Precise coordination required
(Figure: channels k1 ... k6 as tiles in the time/frequency plane.)
Code multiplex
(Figure: channels k1 ... k6 stacked along the code axis, all sharing the same time and frequency.)
Spread Spectrum Technology
• Problem of radio transmission: frequency-dependent fading can wipe out narrowband signals for the duration of the interference
• Solution: spread the narrowband signal into a broadband signal using a special code
(Figure: power spectra. The narrowband signal is spread before transmission; detection at the receiver despreads the signal while spreading any narrowband interference.)
Spread Spectrum Technology
• Side effects:
– coexistence of several signals without dynamic coordination
– tap-proof
• Alternatives: Direct Sequence (DS/SS), Frequency Hopping (FH/SS)
• Spread spectrum increases the bandwidth of the message signal by a factor N, the processing gain:
N_ss = 10 log10(B_ss / B)  [dB]
Effects of spreading and interference
(Figure: power spectra i) to v): the user signal before and after spreading at the sender; narrowband and broadband interference added on the channel; despreading and filtering at the receiver.)
Spreading and frequency-selective fading
(Figures: channel quality versus frequency. With narrowband channels 1-6 separated by guard spaces, a frequency-selective fade can wipe out the narrowband signal in channel 2; a spread-spectrum signal occupies the whole band, so a fade only weakens part of its energy.)
DSSS (Direct Sequence Spread Spectrum) I
Transmitter: the user data m(t) is multiplied by the chipping sequence c(t), giving the spread-spectrum signal y(t) = m(t)c(t), which then modulates the radio carrier.
Receiver: the received signal is demodulated, multiplied by the same chipping sequence c(t) in a correlator, integrated over one bit period, sampled, and passed to a decision device to recover the data.
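A toy numerical sketch of this transmitter/receiver pair, using a short made-up chipping sequence (antipodal signals, one data bit per full chip sequence):

```python
def spread(data_bits, chips):
    """DSSS transmitter: y(t) = m(t) c(t) with antipodal (+/-1) signals.
    Each data bit is multiplied by the whole chipping sequence."""
    m = [1 - 2 * b for b in data_bits]        # bit 0 -> +1, bit 1 -> -1
    return [mi * c for mi in m for c in chips]

def despread(y, chips):
    """DSSS receiver: correlate with c(t), integrate over one bit, decide."""
    n = len(chips)
    bits = []
    for i in range(0, len(y), n):
        corr = sum(yj * c for yj, c in zip(y[i:i + n], chips))
        bits.append(0 if corr > 0 else 1)     # decision device
    return bits

c = [+1, -1, -1, +1, -1, +1]                  # example chipping sequence
tx = spread([1, 0, 1], c)
assert despread(tx, c) == [1, 0, 1]
```

Because the decision integrates over all chips of a bit, flipping a single chip (narrowband interference hitting part of the spread signal) does not change the decoded bit.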
DS/SS Comments III
• The pseudonoise (PN) sequence is chosen so that its autocorrelation is very narrow => its PSD is very wide
– Autocorrelation concentrated within < Tc
– Cross-correlation between two users' codes is very small
DS/SS Comments IV
• Secure and Jamming Resistant
– Both receiver and transmitter must know c(t)
– Since PSD is low, hard to tell if signal present
– Since wide response, tough to jam everything
• Multiple access
– If ci(t) is orthogonal to cj(t), then users do not interfere
• Near/Far problem
– Users must be received with the same power
FH/SS (Frequency Hopping Spread
Spectrum) I
• Discrete changes of carrier frequency
– sequence of frequency changes determined via PN sequence
• Two versions
– Fast Hopping: several frequencies per user bit (FFH)
– Slow Hopping: several user bits per frequency (SFH)
• Advantages
– frequency selective fading and interference limited to short period
– uses only small portion of spectrum at any time
• Disadvantages
– not as robust as DS/SS
– simpler to detect
FHSS (Frequency Hopping Spread Spectrum) II
(Figure: user data 0 1 0 1 1 with bit period Tb. Slow hopping: 3 bits per hop, dwelling for Td on each of the frequencies f1, f2, f3. Fast hopping: 3 hops per bit.)
Tb: bit period, Td: dwell time
Code Division Multiple Access (CDMA)
• Multiplexing Technique used with spread spectrum
• Start with data signal rate D
– Called bit data rate
• Break each bit into k chips according to fixed pattern specific
to each user
– User’s code
• New channel has chip data rate kD chips per second
• E.g. k=6, three users (A,B,C) communicating with base
receiver R
• Code for A = <1,-1,-1,1,-1,1>
• Code for B = <1,1,-1,-1,1,1>
• Code for C = <1,1,-1,1,1,-1>
CDMA Example
CDMA Explanation
• Consider A communicating with base
• Base knows A’s code
• Assume communication already synchronized
• A wants to send a 1
– Send chip pattern <1,-1,-1,1,-1,1>
• A’s code
• A wants to send a 0
– Send chip pattern <-1,1,1,-1,1,-1>
• Complement of A's code
• Decoder ignores other sources when using A’s code to decode
– Orthogonal codes
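The explanation above can be sketched numerically with the user codes given earlier (A and B shown here; their codes are orthogonal, so the correlator separates them):

```python
codes = {"A": [1, -1, -1, 1, -1, 1],
         "B": [1, 1, -1, -1, 1, 1],
         "C": [1, 1, -1, 1, 1, -1]}

def send(bit, code):
    """A 1 is sent as the chip pattern itself, a 0 as its complement."""
    return code if bit == 1 else [-c for c in code]

def decode(received, code):
    """Correlate the (possibly summed) channel signal with a user's code."""
    corr = sum(r * c for r, c in zip(received, code))
    return 1 if corr > 0 else 0

# A sends a 1 and B sends a 0 at the same time; the channel adds the chips
channel = [a + b for a, b in zip(send(1, codes["A"]), send(0, codes["B"]))]
assert decode(channel, codes["A"]) == 1
assert decode(channel, codes["B"]) == 0
```

Decoding with A's code nulls B's contribution (inner product 0) and recovers A's bit, and vice versa.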
CDMA for DSSS
• n users each using different orthogonal PN
sequence
• Modulate each user's data stream
– Using BPSK
• Multiply by spreading code of user
CDMA in a DSSS Environment
Seven Channel CDMA Encoding and
Decoding
FHSS (Frequency Hopping Spread Spectrum) III
Transmitter: the modulated data signal is mixed with a carrier from a frequency synthesizer that is retuned according to the hopping sequence.
Receiver: the received signal is mixed with a synthesizer driven by the same hopping sequence and then demodulated to recover the data.
Applications of Spread Spectrum
• Cell phones
– IS-95 (DS/SS)
– GSM
• Global Positioning System (GPS)
• Wireless LANs
– 802.11b
Performance of DS/SS Systems
• Pseudonoise (PN) codes
– Spread the signal at the transmitter
– Despread the signal at the receiver
• Ideal PN sequences should be
– Orthogonal (no interference)
– Random (security)
– Autocorrelation similar to white noise (high at τ = 0 and low for τ ≠ 0)
PN Sequence Generation
• Codes are periodic and generated by a shift register and XOR feedback
• Maximum-length (ML) shift register sequences: an m-stage shift register gives length n = 2^m - 1 bits
(Figure: autocorrelation R(τ) of an ML sequence: peaks of height 1 at multiples of nTc, value -1/n in between; feedback formed by XOR of selected register outputs.)
Generating PN Sequences
Periodic autocorrelation: R(m) = (1/L) Σ_(n=1..L) c_n c_(n+m) = 1 for m = 0, and -1/L for 1 <= m <= L-1
Example feedback taps for ML sequences: 6 stages: [1, 6]; 8 stages: [1, 5, 6, 7]
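A sketch of an LFSR generator using the 6-stage taps from the table (the shift direction and tap indexing here are one common convention, assumed for illustration); the period, balance, and two-valued autocorrelation can then be checked numerically:

```python
def m_sequence(taps, m):
    """Maximum-length sequence from an m-stage LFSR with XOR feedback.
    taps: register stages (1-indexed) whose outputs are XORed,
    e.g. [1, 6] for the 6-stage generator in the table above."""
    state = [1] * m                       # any nonzero initial state works
    seq = []
    for _ in range(2 ** m - 1):           # one full period: L = 2^m - 1
        seq.append(state[-1])             # output taken from the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]            # XOR of the tapped stages
        state = [fb] + state[:-1]         # shift, feedback enters in front
    return seq

seq = m_sequence([1, 6], 6)               # L = 63 chips per period
```

One period contains 2^(m-1) = 32 ones, and in ±1 notation the periodic autocorrelation equals -1 (i.e. -1/L after normalization) at every nonzero shift, as stated above.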
Problems with m-sequences
• Cross-correlations with other m-sequences
generated by different input sequences can be
quite high
• Easy to guess connection setup in 2m samples
so not too secure
• In practice, Gold codes or Kasami sequences
which combine the output of m-sequences
are used.
Detecting DS/SS PSK Signals
Transmitter: the bipolar NRZ data m(t) is multiplied by the PN sequence c(t), giving the spread-spectrum signal y(t) = m(t)c(t), and then by the carrier sqrt(2)cos(ω_c t + θ).
Receiver: the received signal z(t) is multiplied by the carrier and by c(t), low-pass filtered to give w(t), integrated over one bit, and passed to a decision device.
Processing gain: N_ss = 10 log10(B_ss / B) = 10 log10(Tb / Tc)
Multiple Access Performance
• Assume K users in the same frequency band
• We are interested in user 1; the other users interfere
(Figure: users 2 ... 6 transmitting around receiver 1.)
Signal Model
• We are interested in signal 1, but we also receive signals from the other K-1 users:
x_k(t) = sqrt(2) m_k(t - τ_k) c_k(t - τ_k) cos(ω_c t + φ_k)
• At the receiver:
x(t) = x_1(t) + Σ_(k=2..K) x_k(t)
Interfering Signal
• The contribution of user k at the output of user 1's correlator:
I_1 = ∫_0^Tb m_k(t - τ_k) c_k(t - τ_k) c_1(t) dt
• Ideally, the spreading codes are orthogonal:
∫_0^Tb c_1(t) c_1(t) dt = A,   ∫_0^Tb c_k(t - τ_k) c_1(t) dt = 0
Multiple Access Interference (MAI)
With the standard Gaussian approximation, the bit error probability is
P_b = Q( [ (K-1)/(3N) + N_0/(2E_b) ]^(-1/2) )
(Figure: P_b curves for N = 8 and N = 32.)
Near/Far Problem (I)
• Performance estimates derived using assumption that all
users have same power level
• Reverse link (mobile to base) makes this unrealistic since
mobiles are moving
• Adjust power levels constantly to keep equal
Near/Far Problem (II)
P_b = Q( [ (1/(3N)) Σ_(k=2..K) E_b^(k)/E_b^(1) + N_0/(2E_b^(1)) ]^(-1/2) )
• The received signal is sampled at the rate 1/Ts > 2/Tc for detection and synchronization
• It is fed to all M RAKE fingers; an interpolation/decimation unit provides a data stream at the chip rate 1/Tc
• Each finger correlates with the complex conjugate of the spreading sequence; the finger outputs are weighted (maximum-ratio criterion) and summed over one symbol