S.72-3320 Advanced Digital Communication (4 CR): Convolutional Codes
Targets today
Why apply convolutional coding?
Defining convolutional codes
Practical encoding circuits
Defining quality of convolutional codes
Decoding principles
Viterbi decoding
Convolutional encoding
[Figure: block diagram of an (n,k,L) encoder. Blocks of k message bits enter the encoder and n encoded bits leave it; each input bit influences n(L+1) output bits.]
Example: Convolutional encoder, k = 1, n = 2
[Figure: (n,k,L) = (2,1,2) encoder built around a shift register of memory depth L (hence $2^L$ states); the output interleaves the two branch outputs, $x_{out} = x'_1 x''_1\, x'_2 x''_2\, x'_3 x''_3 \dots$]
A convolutional encoder is a finite state machine (FSM) processing information bits in a serial manner.
Thus the generated code is a function of the input and of the state of the FSM.
In this (n,k,L) = (2,1,2) encoder each message bit influences a span of C = n(L+1) = 6 successive output bits; C is called the constraint length.
Thus, for generation of the n-bit output, we require in this example n parallel outputs from the shift register of the k = 1 convolutional encoder.
Example: (n,k,L)=(3,2,1) Convolutional encoder
[Figure: encoder circuit.]
Generator sequences specify the encoder connections:
$$\mathbf{g}^{(n)} = [\, g_0^{(n)}\ \ g_1^{(n)}\ \cdots\ g_m^{(n)} \,], \qquad \mathbf{g}^{(1)} = [1\ 0\ 1\ 1], \quad \mathbf{g}^{(2)} = [1\ 1\ 1\ 1]$$
Note that the generator sequence length always exceeds the register depth by one.
Generator sequences specify the convolutional code completely through the associated generator matrix.
The encoded code word is produced by matrix multiplication of the input and the generator matrix.
Convolution point of view in encoding and the generator matrix
Encoder outputs are formed by modulo-2 discrete convolutions:
$$\mathbf{v}^{(1)} = \mathbf{u} * \mathbf{g}^{(1)}, \quad \mathbf{v}^{(2)} = \mathbf{u} * \mathbf{g}^{(2)}, \;\dots,\; \mathbf{v}^{(j)} = \mathbf{u} * \mathbf{g}^{(j)}$$
where $*$ denotes convolution, $x * y\,(u) = \sum_{k} x(k)\, y(u-k)$, and $\mathbf{u} = (u_0, u_1, \dots)$ is the information sequence.
Therefore, the l:th bit of the j:th output branch is
$$v_l^{(j)} = \sum_{i=0}^{m} u_{l-i}\, g_i^{(j)} = u_l g_0^{(j)} \oplus u_{l-1} g_1^{(j)} \oplus \cdots \oplus u_{l-m} g_m^{(j)}$$
where $m = L + 1$ and $u_{l-i} = 0$ for $l < i$.
For $\mathbf{g}^{(1)} = [1\ 0\ 1\ 1]$ and $\mathbf{g}^{(2)} = [1\ 1\ 1\ 1]$ the generator matrix interleaves the generator sequences, each row shifted n = 2 positions to the right with respect to the previous one:
$$\mathbf{G} = \begin{bmatrix} g_0^{(1)}g_0^{(2)} & g_1^{(1)}g_1^{(2)} & \cdots & g_m^{(1)}g_m^{(2)} & & \\ & g_0^{(1)}g_0^{(2)} & g_1^{(1)}g_1^{(2)} & \cdots & g_m^{(1)}g_m^{(2)} & \\ & & \ddots & & & \ddots \end{bmatrix} = \begin{bmatrix} 11 & 01 & 11 & 11 & & \\ & 11 & 01 & 11 & 11 & \\ & & \ddots & & & \ddots \end{bmatrix}$$
and the encoded word is $\mathbf{v} = \mathbf{u}\,\mathbf{G}$ (modulo 2), consistent with $v_l^{(j)} = u_l g_0^{(j)} \oplus u_{l-1} g_1^{(j)} \oplus \cdots \oplus u_{l-m} g_m^{(j)}$.
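To make the matrix view concrete, here is a minimal Python sketch (the helper name generator_matrix is mine, not from the slides) that builds G for these generator sequences and encodes by matrix multiplication; for u = (1 1 1 0 1) it reproduces the code word worked out on a later slide:

```python
import numpy as np

def generator_matrix(gens, k_msg):
    """Build the k_msg x n(k_msg + m) generator matrix: the interleaved
    generator taps placed on successive rows, each shifted right by n."""
    n, m = len(gens), len(gens[0]) - 1
    row = np.array([g[i] for i in range(m + 1) for g in gens])  # 11 01 11 11
    G = np.zeros((k_msg, n * (k_msg + m)), dtype=int)
    for r in range(k_msg):
        G[r, n * r : n * r + len(row)] = row
    return G

G = generator_matrix([[1, 0, 1, 1], [1, 1, 1, 1]], k_msg=5)
u = np.array([1, 1, 1, 0, 1])
print(u @ G % 2)   # -> 11 10 01 01 11 10 11 11
```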
(Example after S. Lin, D. J. Costello: Error Control Coding, 2nd ed., p. 456.)
Representing convolutional codes: Code tree
The number of branches deviating from each node equals $2^k$.
For the (n,k,L) = (2,1,2) encoder the branch outputs are
$$x'_j = m_{j-2} \oplus m_{j-1} \oplus m_j, \qquad x''_j = m_{j-2} \oplus m_j$$
and the transmitted sequence interleaves the two branches: $x_{out} = x'_1 x''_1\, x'_2 x''_2\, x'_3 x''_3 \dots$
[Figure: code tree of the (2,1,2) encoder; each node branches into $2^k = 2$ paths labeled with the output bits $x'_j x''_j$.]
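The state diagram and trellis shown next can be tabulated directly from these two equations. A small sketch (the function name is mine), with state (m_{j-1}, m_{j-2}):

```python
def transitions_212():
    """Enumerate the (2,1,2) encoder's state diagram: for every state
    (m_{j-1}, m_{j-2}) and input bit m_j, give the output pair and next state."""
    rows = []
    for m1 in (0, 1):
        for m2 in (0, 1):
            for m in (0, 1):
                out = (m ^ m1 ^ m2, m ^ m2)   # (x'_j, x''_j)
                rows.append(((m1, m2), m, out, (m, m1)))
    return rows

for state, m, out, nxt in transitions_212():
    print(f"state {state}, input {m} -> output {out}, next state {nxt}")
```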
[Figures: state diagram and code trellis of the (2,1,2) encoder.]
Inspecting the state diagram: structural properties of convolutional codes
Each new block of k input bits causes a transition into a new state.
Hence there are $2^k$ branches leaving each state.
Assuming the encoder starts from the all-zero state, the encoded word for any input sequence can thus be obtained. For instance, for u = (1 1 1 0 1) the encoded word v = (11, 10, 01, 01, 11, 10, 11, 11) is produced (the final branches flush the encoder back to the all-zero state).
Verify that you obtain the same result!
[Figure: state diagram walk for u = (1 1 1 0 1), showing the input and state at each step.]
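The check can also be done programmatically. A sketch (assuming the generators $\mathbf{g}^{(1)} = [1\ 0\ 1\ 1]$, $\mathbf{g}^{(2)} = [1\ 1\ 1\ 1]$ of the earlier example; conv_encode is my name) that evaluates the modulo-2 convolutions, including the register flushing:

```python
def conv_encode(u, gens):
    """Rate-1/n convolutional encoding by discrete modulo-2 convolution:
    v_l^(j) = sum_i u_{l-i} g_i^(j) (mod 2), branches interleaved."""
    m = len(gens[0]) - 1
    out = []
    for l in range(len(u) + m):            # m extra steps flush the encoder
        for g in gens:
            bit = 0
            for i, gi in enumerate(g):
                if gi and 0 <= l - i < len(u):
                    bit ^= u[l - i]
            out.append(bit)
    return out

v = conv_encode([1, 1, 1, 0, 1], [[1, 0, 1, 1], [1, 1, 1, 1]])
print(v)   # -> 11 10 01 01 11 10 11 11, as above
```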
Code weight, path gain, and generating function
The state diagram can be modified to yield information on code distance properties (i.e., it tells how good the code is at detecting or correcting errors).
Rules (example on the next slide):
– (1) Split S0 into initial and final state, remove self-loop
– (2) Label each branch by the branch gain $X^i$, where i is the weight* of the n encoded bits on that branch
– (3) Each path connecting the initial state to the final state represents a nonzero code word that diverges from and remerges with S0 only once
The path gain is the product of the branch gains along a path, and the weight of the associated code word is the power of X in the path gain.
The code weight distribution is obtained by using a weighted gain formula to compute the code's generating function (input-output equation):
$$T(X) = \sum_i A_i X^i$$
*In linear codes, the weight is the number of 1s in the encoder output.
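Deriving T(X) by hand can be tedious, so a brute-force cross-check helps. A sketch (assuming the $\mathbf{g}^{(1)} = [1\ 0\ 1\ 1]$, $\mathbf{g}^{(2)} = [1\ 1\ 1\ 1]$ code; all names are mine) that enumerates short inputs whose paths leave S0 once and remerge only at the end, tallying code-word weights:

```python
from itertools import product
from collections import Counter

def codeword_weight(u, gens):
    """Weight of the code word for input u followed by m flushing zeros."""
    m, w = len(gens[0]) - 1, 0
    for l in range(len(u) + m):
        for g in gens:
            bit = 0
            for i, gi in enumerate(g):
                if gi and 0 <= l - i < len(u):
                    bit ^= u[l - i]
            w += bit
    return w

def weight_counts(gens, max_len):
    """Count paths that diverge from and remerge with S0 exactly once:
    u starts and ends with 1 and the state never returns to all-zero."""
    m, counts = len(gens[0]) - 1, Counter()
    for n in range(1, max_len + 1):
        for mid in product([0, 1], repeat=max(0, n - 2)):
            u = [1] + list(mid) + ([1] if n > 1 else [])
            state, ok = [0] * m, True
            for b in u:
                state = [b] + state[:-1]
                if not any(state):             # remerged with S0 too early
                    ok = False
                    break
            if ok:
                counts[codeword_weight(u, gens)] += 1
    return counts

c = weight_counts([[1, 0, 1, 1], [1, 1, 1, 1]], max_len=12)
print({w: c[w] for w in (6, 7, 8)})  # expect {6: 1, 7: 3, 8: 5}: first terms of T(X)
```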
[Figure: modified state diagram with branch gains; a branch whose n output bits contain two 1s has gain $X^2$, a branch with one 1 has gain $X^1$.]
$$T(X) = \sum_i A_i X^i = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^{10} + \cdots$$
Where do these terms come from? (Each coefficient $A_i$ counts the paths, i.e. the code words, of weight i.)
Distance properties of convolutional codes
Code strength is measured by the minimum free distance:
$$d_{free} = \min \{\, d(\mathbf{v}', \mathbf{v}'') : \mathbf{u}' \neq \mathbf{u}'' \,\}$$
where v' and v'' are the encoded words corresponding to the information sequences u' and u''. The code can correct up to $t = \lfloor d_{free}/2 \rfloor$ errors.
The minimum free distance $d_{free}$ denotes:
– the minimum weight of all the paths in the state diagram that diverge from and remerge with the all-zero state S0
– the lowest power of X in the code-generating function T(X):
$$T(X) = \sum_i A_i X^i = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^{10} + \cdots \;\Rightarrow\; d_{free} = 6$$
Coding gain:
$$G_c = \frac{k\, d_{free}}{2n} = \frac{R_c\, d_{free}}{2} \ge 1, \qquad \text{i.e.}\quad 10\log_{10}\!\left(R_c\, d_{free}/2\right)\ \text{dB}$$
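For example, for the rate $R_c = 1/2$ code above with $d_{free} = 6$, the gain is $10\log_{10}(0.5 \cdot 6/2) = 10\log_{10}(1.5) \approx 1.8$ dB.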
Decoding of convolutional codes
Maximum likelihood decoding of convolutional codes means finding the code branch sequence in the code trellis that was most likely transmitted.
Therefore maximum likelihood decoding is based on calculating the Hamming distance for each path that could form the encoded word.
Assume that the information symbols applied into an AWGN channel are equally likely and independent.
Let's denote by x the encoded symbols (no errors) and by y the received (potentially erroneous) symbols:
$$\mathbf{x} = x_0 x_1 x_2 \dots x_j \dots, \qquad \mathbf{y} = y_0 y_1 \dots y_j \dots$$
The probability to receive these symbols is then
$$p(\mathbf{y}, \mathbf{x}) = \prod_{j=0}^{\infty} p(y_j \mid x_j)$$
[Figure: decoder block diagram — the received code y and the candidate error-free codes x feed a distance calculation that yields the bit decisions.]
The most likely path through the trellis maximizes this metric. Often the logarithm is taken of both sides, because the probabilities are often small numbers, yielding
$$\ln p(\mathbf{y}, \mathbf{x}) = \sum_j \ln p(y_j \mid x_j)$$
(note that this corresponds equivalently to the smallest Hamming distance)
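To spell out the Hamming-distance equivalence (a one-line derivation, assuming a binary symmetric channel with error probability p < 1/2): if a candidate code word x differs from the received y in $d_H$ out of N positions, then
$$\ln p(\mathbf{y}, \mathbf{x}) = d_H \ln p + (N - d_H)\ln(1 - p) = N\ln(1 - p) + d_H \ln\frac{p}{1 - p}$$
and since $\ln\frac{p}{1-p} < 0$, maximizing the log-likelihood metric is the same as minimizing the Hamming distance $d_H$.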
Example of exhaustive maximum likelihood detection
Assume a three-bit message is transmitted and encoded by the (2,1,2) convolutional encoder. To clear the decoder, two zero bits are appended after the message. Thus 5 bits are encoded, resulting in 10 code bits. Assume the channel error probability is p = 0.1. After the channel, 10 01 10 11 00 is received (including some errors). What comes out of the decoder, i.e., what was most likely the transmitted code word, and what were the respective message bits?
[Figure: decoding trellis showing the states, the candidate paths with their decoder outputs, and the per-path metrics $\ln p(\mathbf{y}, \mathbf{x}) = \sum_j \ln p(y_j \mid x_j)$.]
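A brute-force sketch of this search (assuming the (2,1,2) output equations $x'_j = m_j \oplus m_{j-1} \oplus m_{j-2}$, $x''_j = m_j \oplus m_{j-2}$ from the earlier slide; the function names are mine): it encodes every 3-bit message plus the two flushing zeros and picks the code word closest to the received sequence.

```python
from itertools import product

def encode_212(bits):
    """(2,1,2) encoder: x' = m_j ^ m_{j-1} ^ m_{j-2}, x'' = m_j ^ m_{j-2}."""
    m1 = m2 = 0
    out = []
    for m in bits:
        out += [m ^ m1 ^ m2, m ^ m2]
        m1, m2 = m, m1
    return out

r = [1,0, 0,1, 1,0, 1,1, 0,0]                    # received word
best = min((list(u) for u in product((0, 1), repeat=3)),
           key=lambda u: sum(a != b for a, b in zip(encode_212(u + [0, 0]), r)))
print(best, encode_212(best + [0, 0]))
# -> [0, 1, 0] with code word 00 11 10 11 00 (Hamming distance 2 to r)
```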
The Viterbi algorithm
The problem of optimum decoding is to find the minimum distance path from the initial state back to the initial state (below, from S0 to S0). The minimum distance is the smallest of the sums of path metrics from S0 to S0.
The exhaustive maximum likelihood method must search all the paths in the trellis ($2^k$ paths emerging from / entering each of the $2^{L+1}$ states of an (n,k,L) code).
The Viterbi algorithm improves computational efficiency by concentrating on the survivor paths of the trellis.
The survivor path
Assume for simplicity a convolutional code with k = 1; thus up to $2^k = 2$ branches can enter each state in the trellis diagram.
Assume the optimal path passes through state S. The metric comparison is done by adding the accumulated metrics at S1 and S2 to the branch metrics into S. On the survivor path the accumulated metric is necessarily smaller (otherwise it could not be the optimum path).
(Note that for this encoder the code rate is 1/2 and the memory depth equals L = 2.)
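A compact hard-decision Viterbi sketch for this rate-1/2, L = 2 encoder (a sketch using the output equations of the earlier slides; the function name and structure are mine). At each state only the survivor, i.e. the entering path with the smaller accumulated Hamming metric, is kept; on the received word of the exhaustive-search example it returns the same decision:

```python
def viterbi_212(r, n_msg):
    """Hard-decision Viterbi decoding for the (2,1,2) encoder
    (x' = m ^ m1 ^ m2, x'' = m ^ m2), assuming n_msg message bits
    followed by two flushing zeros. State = (m_{j-1}, m_{j-2})."""
    INF = float("inf")
    metrics = {(0, 0): 0, (0, 1): INF, (1, 0): INF, (1, 1): INF}
    paths = {s: [] for s in metrics}
    for j in range(n_msg + 2):
        new_metrics = {s: INF for s in metrics}
        new_paths = {}
        for (m1, m2), metric in metrics.items():
            if metric == INF:
                continue
            for m in ((0, 1) if j < n_msg else (0,)):  # flushing bits are 0
                out = (m ^ m1 ^ m2, m ^ m2)            # branch output
                d = (out[0] != r[2 * j]) + (out[1] != r[2 * j + 1])
                nxt = (m, m1)
                if metric + d < new_metrics[nxt]:      # keep the survivor only
                    new_metrics[nxt] = metric + d
                    new_paths[nxt] = paths[(m1, m2)] + [m]
        metrics, paths = new_metrics, new_paths
    return paths[(0, 0)][:n_msg], metrics[(0, 0)]

print(viterbi_212([1,0, 0,1, 1,0, 1,1, 0,0], n_msg=3))
# -> ([0, 1, 0], 2): same decision as the exhaustive search, far fewer paths kept
```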
The maximum likelihood path
After register length L + 1 = 3 the branch pattern begins to repeat.
[Figure: trellis of the example decoding with accumulated branch metrics; wherever two paths merge, the one with the smaller accumulated metric is selected.]
How to end the decoding?
In the previous example it was assumed that the register was finally filled with zeros, thus finding the minimum distance path.
In practice, with long code words, zeroing requires feeding a long sequence of zeros to the end of the message bits: this wastes channel capacity and introduces delay.
To avoid this, path memory truncation is applied:
– Trace all the surviving paths back to the depth where they merge.
– The figure on the right shows a common point at a memory depth J.
– J is a random variable; the truncation depth shown in the figure, $J \approx 5L$, has been experimentally found to cause only a negligible increase in error rate.
– Note that this also introduces a decoding delay of $J \approx 5L$ stages of the trellis!
Lessons learned
You understand the differences between cyclic codes and
convolutional codes
You can create a state diagram for a convolutional encoder
You know how to construct convolutional encoder circuits
based on knowing the generator sequences
You can analyze code strength based on known code
generation circuits / state diagrams or generator sequences
You understand how to realize maximum likelihood
convolutional decoding by using exhaustive search
You understand the principle of Viterbi decoding