Principles of Communications
draft
Contents
1 Introduction
4.3 Radio Channels
4.4 Discrete Channel Models
Chapter 1
Introduction
some control numbers are inserted. These numbers are used to verify whether all numbers were received correctly.
Before transmission, signals are discretized, quantized and compressed to reduce the amount of transmitted data. That is why source coding, channel coding and the decomposition of data into separate packets are of primary importance in telecommunication theory.
Messages and signals are transmitted by electromagnetic waves that propagate through different media: wires, optical fibres and radio links. Telecommunication theory therefore also examines various models of transmission lines and accounts for the influence of noise.
Signals that are transmitted through specific communication lines and channels must be adapted to the properties of the line. For example, to transmit a digital signal through a communication line, it must be converted into a sequence of specific continuous time functions that can physically propagate over that line.
Signals that propagate across a communication line are distorted. During reception of distorted signals, complex signal processing procedures are applied to restore the original information as correctly as possible.
Another important problem in telecommunication theory is the division of communication lines, as shared resources, into different channels. Division into channels is the typical technology of the traditional telephone network. When two people speak by phone, a communication channel is created between their phones, over which the signals are transmitted.
In the telephone network the number of communication channels is always less than the number of subscribers. Common communication channels are switched: channels of separate lines are connected together to make one continuous channel, which is attached to a specific subscriber for some time. Problems of temporary channel allocation are investigated by teletraffic theory.
Another way to share common communication resources is to transmit messages and signals from many subscribers over the same transmission media and the same channels. In that case messages are divided into packets, while the transmission media and channels remain unchanged.
Chapter 2
Signals and Information
The power spectral density and the correlation function K_s(τ) of a random signal form a Fourier transform pair (the Wiener-Khinchin theorem).
when τ is significant. The more strongly adjacent samples are related, the more slowly the correlation function changes. Therefore, the correlation function shows how quickly the statistical dependency of the process changes.
2.2 Discretization
In modern communication systems signals are transmitted in digital form. Therefore, continuous signals are discretized before transmission. According to the sampling (discretization) theorem, a signal with maximal frequency F can be represented by discrete time samples spaced by ∆t = 1/(2F) (Fig. 2.4).
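A minimal numeric sketch of this step (the test signal, its frequencies and the duration are illustrative choices, not taken from the text):

```python
import numpy as np

F = 100.0                 # maximal signal frequency, Hz
dt = 1.0 / (2 * F)        # sample spacing Delta_t = 1/(2F) from the theorem
t = np.arange(0.0, 0.1, dt)

# Band-limited test signal: tones at 40 Hz and 95 Hz, both below F.
s = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 95 * t)

print(f"dt = {dt * 1e3:.1f} ms, {len(t)} samples represent 0.1 s of signal")
```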
[Figure: predictive encoder: the prediction ŝ(n) is subtracted from the input s(n) and the difference ∆s(n) is quantized.]
r = R · a. (2.6)
r = 1 − H/H_max = 1 − (1 / log(1/n)) · Σ_{i=1}^{n} p_i log p_i. (2.11)
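Eq. (2.11) is easy to evaluate numerically; a small sketch (the probability vectors are illustrative only):

```python
import math

def redundancy(p):
    """Source redundancy r = 1 - H/Hmax for symbol probabilities p (Eq. 2.11)."""
    H = -sum(pi * math.log2(pi) for pi in p if pi > 0)  # source entropy, bits
    H_max = math.log2(len(p))                           # entropy of equiprobable symbols
    return 1 - H / H_max

print(redundancy([0.25, 0.25, 0.25, 0.25]))  # 0.0: no redundancy
print(redundancy([0.7, 0.1, 0.1, 0.1]))      # ~0.32: unequal probabilities add redundancy
```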
I = h1 − h2. (2.14)
Chapter 3
Codes and Coding
is necessary to describe the whole set of code words; each code word must be compared with all the others. The minimal Hamming distance d0 is an important measure of the selected code.
In primary codes all possible words are in use. The minimal Hamming distance of such codes is d0 = 1. When noise corrupts one symbol of a code word during transmission (1 is received as 0, or 0 is received as 1), the whole code word is received erroneously.
The code word error probability in that case is p_err = k · p0, where p0 is the error probability of one symbol (bit). Clearly, if the number of symbols k in a code word is increased, the code word error probability increases proportionally.
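A short numeric check of this dependence (the values of p0 and k are illustrative; the exact word error probability 1 − (1 − p0)^k is printed next to the linear approximation k · p0, which holds for small p0):

```python
p0 = 1e-3                              # symbol (bit) error probability
for k in (7, 63, 1023):                # code word lengths
    exact = 1 - (1 - p0) ** k          # word is correct only if all k bits are
    print(f"k = {k:4d}: exact {exact:.4e}, approximation k*p0 {k * p0:.4e}")
```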
Error probabilities can be reduced by increasing the signal-to-noise power ratio (thus decreasing the probability p0) and by using error-proof codes that detect and correct errors.
There are two groups of error-proof codes: block and continuous. In block codes the message is split into blocks with a specific number of symbols, and control bits are appended to each block. In continuous codes control bits are inserted among the message bits continuously.
Detecting and correcting errors is possible when the code has unused words, i.e. when not all possible words are allowed. The sender never uses the forbidden code words.
The error correction capability of a code is described by its minimal Hamming distance d0. If d0 = 1, error correction is impossible.
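The minimal Hamming distance of a small code can be found by the direct pairwise comparison described above; a sketch (the toy code is illustrative):

```python
from itertools import combinations

def d_min(code):
    """Minimal Hamming distance d0: compare every pair of code words."""
    return min(sum(a != b for a, b in zip(u, v))
               for u, v in combinations(code, 2))

# Toy code: two message bits plus one even-parity bit.
code = ["000", "011", "101", "110"]
d0 = d_min(code)
print("d0 =", d0)                                  # 2
print("detects", d0 - 1, "error(s); corrects", (d0 - 1) // 2)
```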
bits will be the same as the received parity bits and all syndrome bits will be 0. If the code word was corrupted during transmission, the generated parity bits will differ from the received ones and the syndrome code will contain ones. That means the code word was received with errors. If the selected code has minimal Hamming distance d0 ≥ 3, the syndrome code allows determining the positions of the wrong bits in the code word.
Example. The simplest systematic code is obtained by adding just one parity bit (code (n, n − 1)). The parity bit is obtained by modulo-2 addition of all message bits: a_{n−1} = a0 ⊕ a1 ⊕ ... ⊕ a_{n−2}. Suppose the code word 0111000101 is received; the modulo-2 sum of its bits is 1. Since the number of ones in a valid code word is always even, an error is detected. Only an odd number of errors can be detected, however, and the position of an error remains unknown.
A more complex example is the code (7, 4). The parity bits can be obtained as follows: a4 = a0 ⊕ a1 ⊕ a2, a5 = a0 ⊕ a1 ⊕ a3, a6 = a1 ⊕ a2 ⊕ a3. Such a code has minimal Hamming distance 3 and allows finding the position of a single error.
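A sketch of this (7, 4) code using exactly the parity equations above (the helper names are illustrative):

```python
def encode74(m):
    """Systematic (7,4) encoding with the parity equations given above."""
    a0, a1, a2, a3 = m
    return [a0, a1, a2, a3,
            a0 ^ a1 ^ a2,          # a4
            a0 ^ a1 ^ a3,          # a5
            a1 ^ a2 ^ a3]          # a6

def syndrome(w):
    """Recompute the parity bits from the received message bits and compare."""
    a = encode74(w[:4])
    return tuple(a[i] ^ w[i] for i in (4, 5, 6))

# Each single-bit error produces a distinct nonzero syndrome:
table = {}
for pos in range(7):
    w = encode74([0, 1, 1, 1])
    w[pos] ^= 1
    table[syndrome(w)] = pos

word = encode74([0, 1, 1, 1])
word[2] ^= 1                       # corrupt bit a2 during "transmission"
s = syndrome(word)
word[table[s]] ^= 1                # locate and correct the error
print(s, word == encode74([0, 1, 1, 1]))   # (1, 0, 1) True
```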
(x^3 + x^2 + 1) ⊕ (x + 1) = x^3 + x^2 + x,

(x^3 + x^2 + 1)(x + 1) = x^4 + x^3 + x + x^3 + x^2 + 1 = x^4 + x^2 + x + 1.
  x^3 + x^2
⊕ x^3 + x^2
  ─────────
      x + 1
⊕     x + 1
  ─────────
          0
In the theory of cyclic codes the Galois proposition x^n = 1 is very important; it gives the following property: multiplication of a code polynomial by the variable x corresponds to a cyclic shift (rotation) of the code bits:

F̂(x) = x · F(x) = a_{n−2} x^{n−1} + ... + a_1 x^2 + a_0 x + a_{n−1}. (3.2)
For example, the code 0101110 is described by the polynomial x^5 + x^3 + x^2 + x. After multiplication by x we obtain the polynomial x^6 + x^4 + x^3 + x^2, i.e. the code 1011100.
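This property is easy to verify programmatically (a one-line sketch, bits written with the highest power of x first):

```python
def mul_by_x(code):
    """Multiply the code polynomial by x modulo x^n + 1: a cyclic shift."""
    return code[1:] + code[:1]

print(mul_by_x("0101110"))   # 1011100, as in the example above
```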
Example. Let us create a cyclic code (7, 4). There are two possible different cyclic codes (7, 4); their generating polynomials are G1(x) = x^3 + x^2 + 1 and G2(x) = x^3 + x + 1. Let us select G1(x). For example, the message sequence 0111 is expressed as the polynomial M(x) = x^2 + x + 1. After multiplying that polynomial by x^{n−k} = x^3, we obtain x^3 M(x) = x^5 + x^4 + x^3. Now let us divide the obtained polynomial by the generating polynomial:
  x^5 + x^4 + x^3       | x^3 + x^2 + 1
⊕ x^5 + x^4 + x^2       | x^2 + 1
  ─────────────────
        x^3 + x^2
⊕       x^3 + x^2 + 1
  ─────────────────
                  1

The remainder is R(x) = 1, so the code word polynomial is F(x) = x^3 M(x) + R(x) = x^5 + x^4 + x^3 + 1, i.e. the code word 0111001.
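The same division can be carried out bitwise; a sketch (function names are illustrative, bits are written with the highest power of x first):

```python
def mod2_div_remainder(dividend, divisor):
    """Remainder of modulo-2 polynomial division, bits given MSB first."""
    d = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if d[i] == 1:                       # leading bit set: subtract (XOR) divisor
            for j, g in enumerate(divisor):
                d[i + j] ^= g
    return d[-(len(divisor) - 1):]          # last r bits form the remainder

msg = [0, 1, 1, 1]                  # M(x) = x^2 + x + 1
g1  = [1, 1, 0, 1]                  # G1(x) = x^3 + x^2 + 1
dividend = msg + [0, 0, 0]          # x^3 * M(x)
r = mod2_div_remainder(dividend, g1)
print("remainder:", r)              # [0, 0, 1], i.e. R(x) = 1
print("code word:", msg + r)        # 0111001, as in the division above
```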
[Figure 3.1: Cyclic encoder: a shift register R0, R1, ..., R_{r−1} with modulo-2 adders and feedback taps g1, g2, ..., g_{r−1}; switches J1 and J2 route the information bits (e.g. 101...) to the output while the parity bits are formed in the registers.]
During the first k time intervals the message bits form the parity bits. After that, for r time intervals, both switches are switched over, the feedback loop is broken, and the parity bits from the registers are sent to the output of the encoder.
Cyclic encoders and decoders are the simplest devices among all error-proof code generators.
If the syndrome code contains at least one 1, the word was received with an error.
In hardware, error detection is carried out by encoding the received code word once more. Encoding is done by the same encoder as in Fig. 3.1; the only difference is that the switches J1 and J2 always remain in the position shown. If there is no error, after n time intervals all registers contain the verifying code 00...0. If the code word was received with errors, after n time intervals some bits of the verifying code will be 1: the syndrome code is formed.
The syndrome code has n − k = r bits, so there are in total 2^r − 1 different nonzero syndrome codes. It means that a cyclic code can distinguish 2^r − 1 different errors. The syndrome code also shows the positions of the incorrectly received bits. Adding 1 modulo 2 to a bit (0 or 1) changes that bit. Therefore, error formation can be expressed mathematically as the summation of two binary codes or of the corresponding polynomials:

F̂(x) = F(x) ⊕ E(x).

Here F̂(x) is the polynomial that describes the code word with errors, F(x) the polynomial that describes the allowed code word, and E(x) the error polynomial. E(x) has n symbols, and its ones are at the positions that were received with errors. For example, an error in the first bit is described by E(x) = 100...0, an error in the second bit by E(x) = 010...0, and errors in the first and second bits by E(x) = 110...0. Obviously, the syndrome code is fully determined by the polynomial E(x). According to that proposition it is easy to form all error codes and, by dividing them by the generating polynomial, to obtain the table of syndrome codes. In hardware this can be done by feeding the error codes to the encoder. Table 3.3 presents syndrome codes of the code (7, 4) that was formed according to
2t + 1 ≤ d0 ≤ 2t + 2. (3.6)
[Figure: convolutional encoder: input bits pass through a shift register R1 R2 R3; modulo-2 adders with taps g1, g2, g3 form the output branches 1, 2, 3.]
[Figure 3.3: Effectiveness of cyclic code (7, 4); horizontal axis log p0 from −5 to 0.]
Chapter 4
Communication Lines,
Channels and their
Models
C = 2F k. (4.1)
4.2 Noise
All other oscillations and signals that interfere with the given signal are considered noise. In general, noise can be additive or multiplicative.
Additive noise simply adds to the signal, and a signal affected by such noise can be expressed as follows:

x(t) = s(t) + n(t). (4.3)
W(z) = (1 / (√(2π) σ)) exp(−z^2 / (2σ^2)). (4.6)
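A quick numeric check of Eq. (4.6) (the seed, σ and sample count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0
n = rng.normal(0.0, sigma, 100_000)          # Gaussian noise samples

# Empirical density near z = 0 versus W(0) = 1 / (sqrt(2*pi) * sigma):
hist, edges = np.histogram(n, bins=200, density=True)
centers = (edges[:-1] + edges[1:]) / 2
W0 = 1.0 / (np.sqrt(2 * np.pi) * sigma)
print(f"theory {W0:.4f}, empirical {hist[np.argmin(np.abs(centers))]:.4f}")
```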
Here A(t) and φ(t) are the random amplitude and phase, respectively. An example of the correlation function of harmonic noise is a decaying cosinusoidal function:
[Figure: radio wave propagation between a base station of height hBS and a mobile station of height hMS: direct wave, reflection and diffraction.]
While the mobile station moves, the signal delay and other reception conditions also change. Fig. 4.2 illustrates the change of the signal level when the mobile station moves only a few meters.
[Figure: discrete channel model: a binary sequence 1 0 1 at moments t, t + T, t + 2T; input symbols a1, a2, a3, ..., ai, ..., am are mapped to output symbols b1, b2, b3, ..., bj, ..., bn.]
Chapter 5
Signal Reception in Noise
For further analysis let us use discrete time, dividing the time interval T into L parts. Then Eq. (5.4) can be rewritten as

x(l) = s_i(l) + n(l), l = 0, 1, ..., L − 1. (5.5)

In the white noise case any two samples n(l1) and n(l2) on the discrete time axis are independent; that is why the L-dimensional probability density function of the noise n(l) is expressed as a product of one-dimensional functions:
W_L(n_0, n_1, ..., n_{L−1}) = (1 / (√(2π) σ))^L exp(−(1 / (2σ^2)) Σ_{l=0}^{L−1} n_l^2). (5.6)
Expressing n(l) = x(l) − s_i(l) (Eq. (5.5)) and substituting into Eq. (5.6), the following expression is obtained:
When the probabilities p(s_i) and p(s_k) are equal, p(s_i) = p(s_k), and the energies of all signals are equal, E_k = E_i, the threshold ζ_ik = 0 disappears and the decision rule of the optimal receiver takes a very compact form:

K_k > K_i, i = 0, 1, ..., n, i ≠ k. (5.13)

That rule means: the k-th signal is considered received if the correlation K_k of the observed realization x(l) with the copy of the k-th signal is greater than its correlations with the copies of all other signals.
The presented version of the optimal receiver algorithm can be applied directly when the received signal is discretized and processed as a digital signal. The continuous time version of the algorithm can also be written using Eq. (5.9); only the correlations and the energies are calculated differently:
K_k = ∫_0^T x(t) s_k(t) dt and K_i = ∫_0^T x(t) s_i(t) dt, (5.14)

E_k = ∫_0^T s_k^2(t) dt and E_i = ∫_0^T s_i^2(t) dt. (5.15)
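A discrete-time sketch of the decision rule (5.13) with correlations computed as in Eq. (5.14) (the two test signals and the noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
L = 200
t = np.arange(L)

# Two equal-energy candidate signals (copies stored at the receiver).
s = [np.cos(2 * np.pi * 5 * t / L), np.cos(2 * np.pi * 9 * t / L)]

x = s[1] + rng.normal(0, 1.0, L)        # signal 1 transmitted, white noise added

# Decision rule (5.13): choose k with the largest correlation K_k.
K = [np.dot(x, sk) for sk in s]         # discrete-time version of Eq. (5.14)
print("correlations:", np.round(K, 1), "-> decide signal", int(np.argmax(K)))
```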
[Figure 5.1: Correlation receiver: the input x(t) is multiplied by each signal copy s1(t), ..., sn(t), integrated over T, and the results are passed to the decision device.]
Here

E_kk = ∫_0^T s_k(t) s_k(t) dt, E_ki = ∫_0^T s_k(t) s_i(t) dt, (5.17)

Θ_k = ∫_0^T n(t) s_k(t) dt, Θ_i = ∫_0^T n(t) s_i(t) dt.
[Figure: probability density W(x) with a threshold S on the x axis.]
If the signals are not orthogonal but opposite, s_k(t) = −s_i(t), then a similar formula is obtained:

p_opposite = Q(√2 h). (5.23)
[Figure: log(p) versus h = 1 ... 4 for orthogonal and opposite signals; the curve for opposite signals lies lower (log p down to about −12).]
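The curves above are easy to reproduce. A sketch, assuming p = Q(h) for orthogonal signals (that formula sits on the preceding, partially lost page; here it is an assumption) together with Eq. (5.23) for opposite signals:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

for h in (1, 2, 3, 4):
    p_orth = Q(h)                    # orthogonal signals (assumed formula)
    p_opp = Q(math.sqrt(2) * h)      # opposite signals, Eq. (5.23)
    print(f"h={h}: log p_orth = {math.log10(p_orth):6.2f}, "
          f"log p_opp = {math.log10(p_opp):6.2f}")
```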
are short-time correlations between the additional noise µ(t) and the signals s_k(t) and s_i(t). A more detailed analysis of the influence of the additional noise depends on its properties and its relations with the transmitted signal. For example, if harmonic signals are transmitted, the receiver also receives harmonic noise A_n cos(ω_n t + φ),
Eq. (5.30) shows that the noise dispersion depends not on the shape of the filter's impulse response but only on its norm:

[g(t)]^2 = ∫_0^T g^2(T − t) dt.
can become an equality only when f(x) = g(x). Thus the quantity from Eq. (5.31) becomes maximal when the equality s_i(t) = g(T − t) is satisfied. That equality can be rewritten in a more convenient form:
by the integral

∫_0^T x(t) g(T − t) dt = ∫_0^T x(t) s_i(t) dt = K_i,

which coincides with the one from Eq. (5.14) and represents the correlation K_i between the noisy signal and the copy of the signal. That is why the correlators in Fig. 5.1, built from voltage multipliers and integrators, can be replaced by matched filters (Fig. 5.4).
[Figure 5.4: Receiver in which the correlators are replaced by matched filters MF1, ..., MFn; the input x(t) is filtered and the outputs are passed to the decision device.]
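A numeric check of this equivalence: the matched filter output sampled at t = T equals the correlator output K_i (the signal and noise parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
L = 100
si = np.sin(2 * np.pi * 4 * np.arange(L) / L)    # signal copy s_i
x = si + rng.normal(0, 0.5, L)                   # received signal with noise

g = si[::-1]                        # matched filter impulse response g(t) = s_i(T - t)
y = np.convolve(x, g)               # filter output

K_i = np.dot(x, si)                 # correlator output (Eq. 5.14, discrete time)
print(np.isclose(y[L - 1], K_i))    # True: output sampled at t = T equals K_i
```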
[Figure: (a) matched filter structure; (b) input signal on the interval t0 ... t0 + T; (c) filter output on the interval t0 ... t0 + 2T.]
Chapter 6
Multiplexing of
Communication Lines
S_M(t) = Σ_{i=1}^{I} S_i(t). (6.1)
S_k(t) ≢ Σ_{i=1}^{I} C_i S_i(t), i ≠ k. (6.2)
(FDMA). The other method uses division of the time axis into periodically repeating intervals; each signal gets its own sequence of periodic time intervals. That method is called Time Division Multiple Access (TDMA). There are systems that use both frequency and time division of the communication channel. In radio communication systems, Code Division Multiple Access (CDMA), in which different signals have different codes, is now spreading.
[Figure: multiplexing transmitter: modulators M1, ..., Mn with inputs e1, ..., en; their outputs are summed (S) into the total signal SM(t).]
The multi-user receiver (Fig. 6.3) receives the total signal ŜM(t); the different modulated signals are separated using band-pass filters (BPF) and then detected.
[Figure 6.3: Multi-user receiver: the total signal ŜM(t) is applied to filters MF1, ..., MFn followed by detectors D1, ..., Dn, which produce the estimates ê1, ..., ên.]
of orthogonal functions:

∫_0^T cos(ω_i t − φ_i) cos(ω_j t − φ_j) dt = 0, i ≠ j. (6.4)
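Eq. (6.4) can be verified numerically for harmonic carriers whose frequencies are multiples of 1/T (a sketch; the harmonic numbers and phases are arbitrary):

```python
import numpy as np

T = 1.0
t = np.linspace(0, T, 10_000, endpoint=False)
dt = t[1] - t[0]

def carrier(k, phi):
    return np.cos(2 * np.pi * k / T * t - phi)   # omega_k = 2*pi*k/T

# The integral of Eq. (6.4) vanishes for different harmonics i != j:
print(np.sum(carrier(3, 0.4) * carrier(5, 1.1)) * dt)   # ~0
print(np.sum(carrier(3, 0.4) * carrier(3, 0.4)) * dt)   # ~T/2 for i == j
```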
[Figure: multicarrier transmitter: coefficients C1, C2, ..., Cn are applied to a Fourier transformer followed by a DAC, producing the signal s(t).]
[Figure: time division of a channel: (a) signal s1(t) transmitted in slots t11, t12, ...; (b) signal s2(t) in slots t21, t22, ...; (c) the combined signal s(t); slot period Tc.]
[Figure: spread-spectrum receiver: the received signal is demodulated, multiplied by the code sequence (e.g. 100110) and filtered.]
[Figure: m-sequence generator: a shift register R0, R1, ..., R_{r−1} with modulo-2 feedback taps g1, g2, ..., g_{r−1} determined by the generating polynomial.]
Properties of m-sequences:
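A sketch of such a generator, assuming the primitive polynomial x^3 + x + 1 (this particular polynomial is an illustrative choice, not taken from the text). The printed period shows the balance property of m-sequences: 2^{r−1} ones and 2^{r−1} − 1 zeros per period.

```python
def m_sequence(taps, state):
    """One period (2^r - 1 bits) of an m-sequence from an r-stage shift register.

    The new bit is the modulo-2 sum of the register stages listed in `taps`;
    the register must start in a nonzero state.
    """
    r = len(state)
    out = []
    for _ in range(2 ** r - 1):
        out.append(state[0])
        new = 0
        for tpos in taps:
            new ^= state[tpos]
        state = state[1:] + [new]          # shift with feedback
    return out

# Recurrence a_n = a_{n-2} XOR a_{n-3}, from the assumed polynomial x^3 + x + 1:
seq = m_sequence(taps=[0, 1], state=[1, 0, 0])
print("".join(map(str, seq)))   # 1001011: period 7, with 4 ones and 3 zeros
```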