S.72-3320 Advanced Digital Communication (4 CR) : Convolutional Codes
Convolutional Codes
Targets today
Convolutional encoding
[Figure: an (n,k,L) convolutional encoder — k input (message) bits in, n encoded bits out; memory depth L determines the number of states, and each input bit influences n(L+1) output bits.]

Example encoder output equations:

$x'_j = m_{j-3} \oplus m_{j-2} \oplus m_j$
$x''_j = m_{j-3} \oplus m_{j-1} \oplus m_j$
$x'''_j = m_{j-2} \oplus m_j$
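The three output equations above can be checked with a short sketch (the function name and the handling of early indices are mine, assuming $m_j = 0$ for $j < 0$):

```python
def encode_3(m):
    """Outputs (x', x'', x''') of the three-branch example encoder for message m."""
    get = lambda j: m[j] if j >= 0 else 0   # m_j = 0 before the message starts
    out = []
    for j in range(len(m)):
        x1 = get(j - 3) ^ get(j - 2) ^ get(j)
        x2 = get(j - 3) ^ get(j - 1) ^ get(j)
        x3 = get(j - 2) ^ get(j)
        out.append((x1, x2, x3))
    return out

# An impulse input exposes the tap connections of each branch
print(encode_3([1, 0, 0, 0]))  # [(1, 1, 1), (0, 1, 0), (1, 0, 1), (1, 1, 0)]
```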
Timo O. Korhonen, HUT Communication Laboratory
Generator sequences
[Figure: the (n,k,L) encoder again — k input bits, n output bits.]

Each of the n encoder branches is described by a generator sequence

$g^{(n)} = [\,g_0^{(n)}\ g_1^{(n)}\ \cdots\ g_m^{(n)}\,]$

For the (2,1,2) encoder:

$g^{(1)} = [\,1\ 0\ 1\ 1\,]$
$g^{(2)} = [\,1\ 1\ 1\ 1\,]$

Note that the generator sequence length always exceeds the register depth by 1.

Encoding is the discrete convolution of the input with each generator sequence:

$v^{(1)} = u * g^{(1)},\quad v^{(2)} = u * g^{(2)},\ \ldots,\ v^{(j)} = u * g^{(j)}$
for the input bit sequence $u = (u_0, u_1, \ldots)$, where $m = L + 1$ and $u_{l-i} \triangleq 0$ for $l < i$.

For this encoder the two branch outputs ($j$ branches in general) are

$v_l^{(1)} = u_l \oplus u_{l-2} \oplus u_{l-3}$
$v_l^{(2)} = u_l \oplus u_{l-1} \oplus u_{l-2} \oplus u_{l-3}$
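As a sketch of the convolution view (the function name is mine), each branch output $v^{(j)} = u * g^{(j)} \bmod 2$ can be computed directly; an impulse input then reproduces the generator sequence itself:

```python
def mod2_convolve(u, g):
    """Discrete convolution of input bits u with generator sequence g, mod 2."""
    out_len = len(u) + len(g) - 1
    v = [0] * out_len
    for l in range(out_len):
        for i, gi in enumerate(g):          # v_l = XOR over i of u_{l-i} * g_i
            if gi and 0 <= l - i < len(u):
                v[l] ^= u[l - i]
    return v

# Generator sequences from the slides: g(1) = [1 0 1 1], g(2) = [1 1 1 1]
g1, g2 = [1, 0, 1, 1], [1, 1, 1, 1]
print(mod2_convolve([1], g1))  # [1, 0, 1, 1] -- the impulse response is g itself
print(mod2_convolve([1], g2))  # [1, 1, 1, 1]
```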
encoder output:

$v_l^{(j)} = u_l g_0^{(j)} \oplus u_{l-1} g_1^{(j)} \oplus \cdots \oplus u_{l-m} g_m^{(j)}$

The branch outputs are multiplexed into the transmitted code word:

$v = (v_0^{(1)} v_0^{(2)},\ v_1^{(1)} v_1^{(2)},\ \ldots)$

[Figure: worked numeric example of the multiplexed encoder output.]

The (2,1,2) example encoder used for the state diagram has the output equations

$x'_j = m_{j-2} \oplus m_{j-1} \oplus m_j$
$x''_j = m_{j-2} \oplus m_j$

[Figure: register contents and the resulting encoder states and output pairs.]
Code trellis and state diagram
Each new block of k input bits causes a transition into a new state.
Hence there are $2^k$ branches leaving each state.
Assuming the encoder starts in the all-zero state, the encoded word for any input of k bits can thus be obtained. For instance, for u = (1 1 1 0 1) below, the encoded word v = (1 1, 1 0, 0 1, 0 1, 1 1, 1 0, 1 1, 1 1) is produced:
Verify that you obtain the same result!
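As a check, a minimal shift-register encoder sketch (function name is mine), assuming the generators $g^{(1)} = [1\ 0\ 1\ 1]$, $g^{(2)} = [1\ 1\ 1\ 1]$ from the earlier slide and three flushing zeros after the message, reproduces the code word above:

```python
def encode(msg, g1=(1, 0, 1, 1), g2=(1, 1, 1, 1), memory=3):
    """Encode msg with a rate-1/2 convolutional encoder, flushing the register."""
    u = list(msg) + [0] * memory          # zero tail returns encoder to state 0
    reg = [0] * memory                    # shift register of past input bits
    pairs = []
    for b in u:
        window = [b] + reg                # current bit followed by the memory
        v1 = sum(x & t for x, t in zip(window, g1)) % 2
        v2 = sum(x & t for x, t in zip(window, g2)) % 2
        pairs.append(f"{v1}{v2}")
        reg = [b] + reg[:-1]              # shift the new bit in
    return pairs

print(encode([1, 1, 1, 0, 1]))
# ['11', '10', '01', '01', '11', '10', '11', '11'] -- matches the slide
```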
The weight structure of the code is summarized by its transfer function

$T(X) = \sum_i A_i X^i$

[Figure: modified state diagram with branch gains; the branch output weights (e.g. weight 2, weight 1) label the transitions.]

For this code

$T(X) = \sum_i A_i X^i = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^{10} + \cdots$
The free distance $d_{free}$ is the minimum weight of all the paths in the state diagram that diverge from and remerge with the all-zero state $S_0$; it is the smallest exponent appearing in the transfer function:

$T(X) = \sum_i A_i X^i = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^{10} + \cdots$

$d_{free} = 6$
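The free distance can also be found mechanically by a shortest-path search over the state diagram (a sketch under my own naming; the default generators $g^{(1)} = [1\ 0\ 1\ 1]$, $g^{(2)} = [1\ 1\ 1\ 1]$ are the ones used in the slides):

```python
import heapq

def free_distance(g1=(1, 0, 1, 1), g2=(1, 1, 1, 1), memory=3):
    """Minimum output weight over paths that leave and re-enter the zero state."""
    def step(state, b):
        window = (b,) + state
        v1 = sum(x & t for x, t in zip(window, g1)) % 2
        v2 = sum(x & t for x, t in zip(window, g2)) % 2
        return (b,) + state[:-1], v1 + v2   # next state, branch output weight

    zero = (0,) * memory
    start, w0 = step(zero, 1)               # diverge: first input bit is a 1
    heap, best = [(w0, start)], {start: w0}
    while heap:                             # Dijkstra over encoder states
        w, s = heapq.heappop(heap)
        if s == zero:
            return w                        # remerged with the all-zero state
        if w > best.get(s, float("inf")):
            continue
        for b in (0, 1):
            ns, bw = step(s, b)
            if w + bw < best.get(ns, float("inf")):
                best[ns] = w + bw
                heapq.heappush(heap, (w + bw, ns))

print(free_distance())  # 6
```

The same search applied to the four-state encoder of the trellis example ($g^{(1)} = [1\ 1\ 1]$, $g^{(2)} = [1\ 0\ 1]$, L = 2) gives the well-known value 5.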
Timo O. Korhonen, HUT Communication Laboratory
Code gain*:

$G_c = \frac{k\, d_{free}}{2n} = \frac{R_c\, d_{free}}{2} \ge 1$

or, in decibels,

$10 \log_{10}(R_c\, d_{free}/2)\ \mathrm{dB}$
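For instance, for the rate $R_c = 1/2$, $d_{free} = 6$ code above, the gain works out to about 1.8 dB (a quick numeric check; the function name is mine):

```python
import math

def coding_gain_db(rate, d_free):
    """Asymptotic coding gain G = 10 log10(Rc * d_free / 2) in dB."""
    return 10 * math.log10(rate * d_free / 2)

print(round(coding_gain_db(0.5, 6), 2))  # 1.76
```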
The decoder compares the received sequence y with the candidate code words x via the likelihood

$p(y \mid x) = \prod_j p(y_j \mid x_j)$

The most likely path through the trellis will maximize this metric.
Often ln() is taken from both sides, because probabilities are often small numbers; the product then becomes a sum, i.e. a distance calculation over the bit decisions:

$\ln p(y \mid x) = \sum_j \ln p(y_j \mid x_j)$
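On a binary symmetric channel with crossover probability p < 1/2, maximizing this sum is equivalent to minimizing the Hamming distance between y and the candidate's code bits — a small check (names and the example sequences are mine):

```python
import math

def log_likelihood(y, x, p=0.1):
    """Sum of ln p(y_j | x_j) over a BSC with crossover probability p."""
    return sum(math.log(p if yj != xj else 1 - p) for yj, xj in zip(y, x))

def hamming(y, x):
    return sum(yj != xj for yj, xj in zip(y, x))

y = [0, 1, 1, 0, 1]
cands = [[0, 1, 1, 1, 1], [0, 0, 1, 1, 1], [1, 0, 0, 1, 0]]
best_ll = max(cands, key=lambda x: log_likelihood(y, x))  # max likelihood
best_hd = min(cands, key=lambda x: hamming(y, x))         # min Hamming distance
print(best_ll == best_hd)  # True
```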
[Figure: four-state trellis (states a, b, c, d); the decoder outputs the bits of the path that is finally selected.]
$p(y \mid x) = \prod_j p(y_j \mid x_j), \qquad \ln p(y \mid x) = \sum_j \ln p(y_j \mid x_j)$

[Figure: trellis example marking the channel errors and the correct path.]
In the trellis there are $2^L$ nodes (states), determined by the memory depth; at each node the branch with the larger accumulated metric is discarded.
Received sequence: $y = 01\ 10\ 11\ 11\ 01\ 00\ 01$ (4 states)
(Note that for this encoder the code rate is 1/2 and the memory depth equals L = 2)
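A hard-decision Viterbi decoder for this rate-1/2, L = 2 encoder ($x'_j = m_j \oplus m_{j-1} \oplus m_{j-2}$, $x''_j = m_j \oplus m_{j-2}$) can be sketched as below (all names are mine); the round trip encodes a message, flips one channel bit, and still recovers the message:

```python
def branch_out(state, b):
    """Output pair for input bit b in state (m_{j-1}, m_{j-2})."""
    s1, s2 = state
    return (b ^ s1 ^ s2, b ^ s2)

def encode(msg):
    state, out = (0, 0), []
    for b in list(msg) + [0, 0]:           # two zeros flush the register
        out.extend(branch_out(state, b))
        state = (b, state[0])
    return out

def viterbi(y):
    """Hard-decision Viterbi decoding, minimizing accumulated Hamming distance."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    metric = {(0, 0): 0}                   # encoder starts in the zero state
    paths = {(0, 0): []}
    for j in range(0, len(y), 2):
        new_metric, new_paths = {}, {}
        for s in states:
            if s not in metric:
                continue
            for b in (0, 1):
                o = branch_out(s, b)
                ns = (b, s[0])
                m = metric[s] + (o[0] != y[j]) + (o[1] != y[j + 1])
                if m < new_metric.get(ns, float("inf")):   # keep the survivor
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[(0, 0)][:-2]              # drop the two flushing zeros

msg = [1, 1, 0, 1, 0]
v = encode(msg)
v[3] ^= 1                                  # one channel error
print(viterbi(v) == msg)  # True
```

Since this code has $d_{free} = 5$, a correct decoder recovers the message from any pattern of up to two channel errors.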
[Figure: Viterbi decoding step by step — at each merge the smaller accumulated metric is selected and the other branch is discarded.]
In the previous example it was assumed that the register was finally filled with zeros, thus finding the minimum-distance path.
In practice, with long code words, zeroing requires feeding a long sequence of zeros after the message bits: this wastes channel capacity and introduces delay.
To avoid this, path memory truncation is applied:
Trace all the surviving paths back to the depth where they merge.
The figure on the right shows a common point at a memory depth J.
J is a random variable whose applicable magnitude, shown in the figure (5L), has been experimentally tested for a negligible error-rate increase.
Note that this also introduces a delay of 5L!
Lessons learned