
S-72.3320 Advanced Digital Communication (4 cr)

Convolutional Codes

Timo O. Korhonen, HUT Communication Laboratory

Overview: Convolutional codes are error-correcting codes that operate on code streams rather than on blocks. They are defined by the parameters (n,k,L), where n is the number of output bits, k is the number of input bits, and L is the memory depth (number of shift register stages). Convolutional codes can be represented by generator sequences or matrices, as well as by trellises and state diagrams. Maximum likelihood decoding means finding the most likely path through the trellis from calculated path metrics and branch distances. Worked examples show how to calculate path metrics for exhaustive maximum likelihood decoding and how the Viterbi algorithm prunes the search down to the survivor paths.

Targets today

Why apply convolutional coding?
Defining convolutional codes
Practical encoding circuits
Defining the quality of convolutional codes
Decoding principles
Viterbi decoding

Convolutional encoding

[Figure: an (n,k,L) encoder block takes k message bits in and puts n encoded bits out; each input bit influences n(L+1) output bits.]

Convolutional codes are applied in applications that require good performance with low implementation complexity. They operate on code streams (not on blocks).
Convolutional codes have memory: previous bits are used to encode or decode the following bits (block codes are memoryless).
Convolutional codes are denoted by (n,k,L), where L is the code (or encoder) memory depth (number of shift register stages).
The constraint length C = n(L+1) is defined as the number of encoded bits that a single message bit can influence.
Convolutional codes achieve good performance by expanding their memory depth.


Example: Convolutional encoder, k = 1, n = 2

x'_j = m_{j-2} ⊕ m_{j-1} ⊕ m_j
x''_j = m_{j-2} ⊕ m_j
x_out = x'_1 x''_1 x'_2 x''_2 x'_3 x''_3 ...

This is an (n,k,L) = (2,1,2) encoder: the memory depth L (the number of register stages) determines the number of states, 2^L = 4.

A convolutional encoder is a finite state machine (FSM) processing information bits in a serial manner.
Thus the generated code is a function of the input and of the state of the FSM.
In this (n,k,L) = (2,1,2) encoder, each message bit influences a span of C = n(L+1) = 6 successive output bits; this span is the constraint length C.
Thus, for generation of the n-bit output, this k = 1 convolutional encoder requires n output branches tapping the shift register.
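As a concrete illustration, here is a minimal Python sketch of this encoder as a finite state machine. The tap order of the output pair (x'_j first) and the all-zero initial register are assumptions read off the equations above.

```python
# Minimal sketch of the (n,k,L) = (2,1,2) encoder (mod-2 arithmetic).
# Assumed: register starts all-zero, output order is (x'_j, x''_j).
def encode_212(message):
    m1 = m2 = 0                       # register contents m_{j-1}, m_{j-2}
    out = []
    for m in list(message) + [0, 0]:  # two flush zeros clear the register
        out += [(m + m1 + m2) % 2,    # x'_j = m_j + m_{j-1} + m_{j-2}
                (m + m2) % 2]         # x''_j = m_j + m_{j-2}
        m1, m2 = m, m1                # shift the register
    return out

print(encode_212([1, 0, 1]))          # -> [1,1, 1,0, 0,0, 1,0, 1,1]
```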


Example: (n,k,L) = (3,2,1) convolutional encoder

x'_j   = m_{j-3} ⊕ m_{j-2} ⊕ m_j
x''_j  = m_{j-3} ⊕ m_{j-1} ⊕ m_j
x'''_j = m_{j-2} ⊕ m_j

After each new block of k input bits a transition into a new state follows.
Hence, from each state, 2^k different successor states may follow.
Each message bit influences a span of C = n(L+1) = 3(1+1) = 6 successive output bits.
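A sketch of the k = 2 case follows. How the serialized bit indices m_{j-3} ... m_j map onto the two parallel input streams depends on the encoder figure, which is not recoverable here; the code assumes the simplest reading, namely that two new bits shift in per cycle and the equations above are evaluated with m_j as the newest bit (even message length assumed).

```python
# Hedged sketch of the (n,k,L) = (3,2,1) encoder: the indexing below is an
# assumption, since the circuit figure is not recoverable from the source.
def encode_321(message):              # len(message) must be even (k = 2)
    m = [0, 0, 0] + list(message)     # zeros stand in for m_{j-3}, m_{j-2}, m_{j-1}
    out = []
    for j in range(4, len(m), 2):     # newest bit index j advances by k = 2
        out += [(m[j-3] + m[j-2] + m[j]) % 2,   # x'_j
                (m[j-3] + m[j-1] + m[j]) % 2,   # x''_j
                (m[j-2] + m[j]) % 2]            # x'''_j
    return out

print(encode_321([1, 0, 1, 1]))       # three coded bits per two message bits
```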

Generator sequences

An (n,k,L) convolutional code can be described by the generator sequences g^(1), g^(2), ..., g^(n), which are the impulse responses of the encoder's n output branches:

g^(n) = [g_0^(n)  g_1^(n)  ...  g_m^(n)]

For example, for the (2,1,3) encoder used in the following examples:

g^(1) = [1 0 1 1]
g^(2) = [1 1 1 1]

Note that the generator sequence length always exceeds the register depth by one.

Generator sequences specify the convolutional code completely through the associated generator matrix.
The encoded convolutional code is produced by matrix multiplication of the input and the generator matrix.
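The phrase "impulse responses" can be checked directly: convolving a unit impulse with each branch's taps returns the generator sequence itself. A small numpy sketch (taps assumed as above):

```python
import numpy as np

g1 = np.array([1, 0, 1, 1])           # g^(1)
g2 = np.array([1, 1, 1, 1])           # g^(2)

# A unit impulse reproduces the generator sequences (mod 2)...
print(np.convolve([1], g1) % 2)       # -> [1 0 1 1]
print(np.convolve([1], g2) % 2)       # -> [1 1 1 1]

# ...and any message is a superposition of shifted impulses:
print(np.convolve([1, 0, 1], g1) % 2) # -> [1 0 0 1 1 1]
```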


Convolution point of view in encoding, and the generator matrix

Encoder outputs are formed by modulo-2 discrete convolutions:

v^(1) = u * g^(1),  v^(2) = u * g^(2),  ...,  v^(n) = u * g^(n)

where u = (u_0, u_1, ...) is the information sequence and * denotes the discrete convolution x(k) * y(k) = Σ_i x(i) y(k - i).

Therefore, the l-th bit of the j-th output branch is*

v_l^(j) = Σ_{i=0}^{m} u_{l-i} g_i^(j) = u_l g_0^(j) ⊕ u_{l-1} g_1^(j) ⊕ ... ⊕ u_{l-m} g_m^(j)

where m = L and u_{l-i} ≜ 0 for i > l.

Hence, for the circuit with L = 3 and

g^(1) = [1 0 1 1]
g^(2) = [1 1 1 1]

the following equations result:

v_l^(1) = u_l ⊕ u_{l-2} ⊕ u_{l-3}
v_l^(2) = u_l ⊕ u_{l-1} ⊕ u_{l-2} ⊕ u_{l-3}

The encoder output interleaves the n branch outputs, n(L+1) output bits per message bit span:

v = [v_0^(1) v_0^(2)  v_1^(1) v_1^(2)  v_2^(1) v_2^(2) ...]

*Note that u is reversed in time, as in the definition of convolution.
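A sketch of these equations in numpy: each branch output is a mod-2 convolution, and the code word interleaves the branches bitwise. (The message below is the one used on the state-diagram slide later, so the printout can be checked against it.)

```python
import numpy as np

g = [np.array([1, 0, 1, 1]),                   # g^(1)
     np.array([1, 1, 1, 1])]                   # g^(2)

def encode(u):
    """v^(j) = u * g^(j) (mod 2); branch outputs interleaved bitwise."""
    branches = [np.convolve(u, gj) % 2 for gj in g]
    return np.stack(branches, axis=1).ravel()  # v0(1) v0(2) v1(1) v1(2) ...

print(encode([1, 1, 1, 0, 1]))
# -> 1 1 1 0 0 1 0 1 1 1 1 0 1 1 1 1, i.e. (11, 10, 01, 01, 11, 10, 11, 11)
```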

Example: Using the generator matrix

g^(1) = [1 0 1 1]
g^(2) = [1 1 1 1]

Interleave the generator sequences bit by bit, [g_0^(1) g_0^(2)  g_1^(1) g_1^(2)  ...  g_m^(1) g_m^(2)] = [11 01 11 11], to get the first row of the generator matrix; each subsequent row is the same pattern shifted n = 2 columns to the right:

G = [ 11 01 11 11
         11 01 11 11
            11 01 11 11
               ...        ]

Then v = u G (mod 2), which reproduces v_l^(j) = u_l g_0^(j) ⊕ u_{l-1} g_1^(j) ⊕ ... ⊕ u_{l-m} g_m^(j) term by term.

Verify that you can obtain the result shown!



S. Lin, D. J. Costello: Error Control Coding, 2nd ed., p. 456
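A sketch of the same computation via the generator matrix; building G by shifting the interleaved row reproduces the convolution result above.

```python
import numpy as np

g1, g2 = [1, 0, 1, 1], [1, 1, 1, 1]
row = [b for pair in zip(g1, g2) for b in pair]     # interleaved: 11 01 11 11

def gen_matrix(num_msg_bits, n=2):
    """Truncated generator matrix: each row is `row` shifted n columns right."""
    G = np.zeros((num_msg_bits, n * (num_msg_bits + len(g1) - 1)), dtype=int)
    for i in range(num_msg_bits):
        G[i, n * i : n * i + len(row)] = row
    return G

u = np.array([1, 1, 1, 0, 1])
print(u @ gen_matrix(len(u)) % 2)   # identical to the interleaved convolution output
```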

Representing convolutional codes: the code tree

[Figure: code tree of the (n,k,L) = (2,1,2) encoder with x'_j = m_{j-2} ⊕ m_{j-1} ⊕ m_j and x''_j = m_{j-2} ⊕ m_j, x_out = x'_1 x''_1 x'_2 x''_2 x'_3 x''_3 ...; the number of branches deviating from each node equals 2^k. The tree shows how each input bit is transformed into two output bits (initially the register is all zero).]

Representing convolutional codes compactly: the code trellis and the state diagram

[Figure: code trellis and state diagram of the same encoder. Nodes are the shift register states; an input bit '1' is indicated by a dashed line.]

Inspecting the state diagram: structural properties of convolutional codes

Each new block of k input bits causes a transition into a new state.
Hence there are 2^k branches leaving each state.
Assuming the encoder starts from the zero state, the encoded word for any input can be read from the diagram. For instance, below for u = (1 1 1 0 1) the encoded word v = (11, 10, 01, 01, 11, 10, 11, 11) is produced (the last three branches flush the register back to zero). Verify that you obtain the same result, for example with the sketch below!

[Figure: encoder state diagram for the (n,k,L) = (2,1,3) code, input '1' indicated by a dashed line. Note that the number of states is 8 = 2^L, i.e., L = 3 (three state bits).]
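A sketch that walks the state diagram to check the claim above. The state ordering (m_{j-1}, m_{j-2}, m_{j-3}) and the taps g^(1) = 1011, g^(2) = 1111 are taken from the earlier slides.

```python
# Walking the 8-state diagram: each input bit selects one of the 2^k = 2
# branches leaving the current state (state = (m_{j-1}, m_{j-2}, m_{j-3})).
def step(state, u):
    s1, s2, s3 = state
    v1 = (u + s2 + s3) % 2            # taps g^(1) = 1 0 1 1
    v2 = (u + s1 + s2 + s3) % 2       # taps g^(2) = 1 1 1 1
    return (u, s1, s2), (v1, v2)

state, v = (0, 0, 0), []
for u in [1, 1, 1, 0, 1] + [0, 0, 0]: # message plus three flush zeros
    state, out = step(state, u)
    v.append(out)
print(v)  # -> (1,1) (1,0) (0,1) (0,1) (1,1) (1,0) (1,1) (1,1)
```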

Code weight, path gain, and generating function

The state diagram can be modified to yield information on code distance properties (i.e., how good the code is at detecting or correcting errors). Rules (example on the next slide):
(1) Split S0 into an initial and a final state; remove the self-loop.
(2) Label each branch by the branch gain X^i, where i is the weight* of the n encoded bits on that branch.
(3) Each path connecting the initial state and the final state represents a nonzero code word that diverges from and re-merges with S0 only once.
The path gain is the product of the branch gains along a path, and the weight of the associated code word is the power of X in the path gain. The code weight distribution is obtained by using a weighted gain formula to compute the generating function (input-output equation)

T(X) = Σ_i A_i X^i

where A_i is the number of encoded words of weight i.

*In linear codes, the weight is the number of 1s in the encoder output.

[Figure: modified state diagram with branch gains; for example, a branch whose two coded bits are 11 has weight 2 and gain X^2, while a branch with one nonzero bit has gain X^1.]

Example: the path representing the state sequence S0 S1 S3 S7 S6 S5 S2 S4 S0 has the path gain X^2 · X^1 · X^1 · X^1 · X^2 · X^1 · X^2 · X^2 = X^12, and the corresponding code word has weight 12.

For this code,

T(X) = Σ_i A_i X^i = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^10 + ...

Where do these terms come from? One way to check them is the enumeration sketched below.
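The coefficients can be checked by enumerating every path that diverges from S0 and re-merges with it, tallying the output weights. A sketch for this 8-state code (same assumed taps as before); because the code has no zero-weight loops outside S0, the bounded search terminates.

```python
from collections import Counter

def step(state, u):
    """One branch of the 8-state machine: (next state, branch weight)."""
    s1, s2, s3 = state
    w = (u + s2 + s3) % 2 + (u + s1 + s2 + s3) % 2   # weight of the n = 2 coded bits
    return (u, s1, s2), w

def weight_counts(max_w):
    counts = Counter()
    stack = [step((0, 0, 0), 1)]        # paths must diverge from S0 with input 1
    while stack:
        state, w = stack.pop()
        if w > max_w:
            continue
        if state == (0, 0, 0):          # re-merged with S0: one nonzero code word
            counts[w] += 1
            continue
        for u in (0, 1):
            nxt, bw = step(state, u)
            stack.append((nxt, w + bw))
    return counts

print(sorted(weight_counts(10).items()))
# expected, matching the slide: [(6, 1), (7, 3), (8, 5), (9, 11), (10, 25)]
```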

Distance properties of convolutional codes

Code strength is measured by the minimum free distance

d_free = min { d(v', v'') : u' ≠ u'' }

where v' and v'' are the encoded words corresponding to the information sequences u' and u''. The code can correct up to t < d_free/2 errors, i.e., t = ⌊(d_free - 1)/2⌋.

The minimum free distance d_free equals:
- the minimum weight of all the paths in the state diagram that diverge from and re-merge with the all-zero state S0, and
- the lowest power of X in the code-generating function T(X).

T(X) = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^10 + ...  =>  d_free = 6

Coding gain*:

G = k d_free / (2n) = R_c d_free / 2 ≥ 1

*For the derivation, see Carlson, p. 583.

Coding gain for some selected convolutional codes

[Table not recoverable from the source. It lists selected convolutional codes with their asymptotic coding gains R_c d_free / 2, expressed for hard decoding also in decibels, 10 log10(R_c d_free / 2) dB.]
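For the running 8-state example (R_c = 1/2, d_free = 6) the hard-decision figure works out as follows:

```python
import math

# Hard-decision asymptotic coding gain of the running example, using the
# slide's formula 10*log10(Rc * dfree / 2).
Rc, dfree = 1 / 2, 6
print(f"{10 * math.log10(Rc * dfree / 2):.2f} dB")   # -> 1.76 dB
```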

Decoding of convolutional codes

Maximum likelihood decoding of convolutional codes means finding the code branch sequence in the code trellis that was most likely transmitted.
Therefore, maximum likelihood decoding is based on calculating the Hamming distances between the received word and every branch sequence that could form an encoded word.
Assume that the information symbols applied to an AWGN channel are equally likely and independent.
Denote the encoded (error-free) symbols by x = x_0 x_1 x_2 ... x_j ... and the received (potentially erroneous) symbols by y = y_0 y_1 ... y_j .... The probability of receiving y when x was sent is then

p(y | x) = Π_j p(y_j | x_j)

The most likely path through the trellis maximizes this metric. Often the logarithm is taken of both sides, because the probabilities are often small numbers, yielding the branch-wise sum (the bit decisions / distance calculation)

ln p(y | x) = Σ_j ln p(y_j | x_j)

Note that maximizing this metric corresponds equivalently to choosing the code word with the smallest Hamming distance from y.

Example of exhaustive maximum likelihood detection

Assume a three-bit message is transmitted and encoded by the (2,1,2) convolutional encoder. To clear the decoder, two zero bits are appended after the message, so 5 bits are encoded, resulting in 10 code bits. Assume the channel error probability is p = 0.1. After the channel, 10 01 10 11 00 is received (including some errors). What comes out of the decoder, i.e., what was most likely the transmitted code word, and what were the respective message bits?

[Figure: the 2^L = 4 trellis states are labeled a, b, c, d; each complete path through the trellis shows the decoder outputs produced if that path is selected.]

p(y | x) = Π_j p(y_j | x_j)  =>  ln p(y | x) = Σ_j ln p(y_j | x_j)

Each received bit contributes a weight of ln(1 - p) = ln 0.9 ≈ -0.11 when it agrees with the branch bit (correct) and ln p = ln 0.1 ≈ -2.30 when it does not (bit in error). For the best path through the trellis:

correct bits: 1 + 1 + 2 + 2 + 2 = 8, contributing 8 · (-0.11) = -0.88
errors: 1 + 1 + 0 + 0 + 0 = 2, contributing 2 · (-2.30) = -4.6
total path metric: -5.48

This is the largest metric; verify that you get the same result, for example with the sketch below! Note also the Hamming distances!
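A brute-force sketch of this example: score all 2^3 candidate messages with the log metric. The encoder taps and output order are the assumptions used in the earlier (2,1,2) sketch; note that the exact metric, 8 ln 0.9 + 2 ln 0.1 ≈ -5.45, matches the slide's -5.48 only approximately, because the slide uses the rounded per-bit weights -0.11 and -2.30.

```python
import math
from itertools import product

def encode_212(message):                      # the (2,1,2) encoder, flushed with 2 zeros
    m1 = m2 = 0
    out = []
    for m in list(message) + [0, 0]:
        out += [(m + m1 + m2) % 2, (m + m2) % 2]
        m1, m2 = m, m1
    return out

y = [1,0, 0,1, 1,0, 1,1, 0,0]                 # received word of the example
p = 0.1

def metric(msg):                              # sum over bits of ln p(y_j | x_j)
    return sum(math.log(1 - p) if x == r else math.log(p)
               for x, r in zip(encode_212(msg), y))

best = max(product((0, 1), repeat=3), key=metric)
print(best, round(metric(best), 2))           # most likely message and its metric
```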

The Viterbi algorithm

The problem of optimum decoding is to find the minimum distance path from the initial state back to the initial state (below, from S0 to S0). The minimum distance is the smallest of the sums of the branch metrics over all paths from S0 to S0.
The exhaustive maximum likelihood method must search all the paths in the trellis: 2^k paths emerge from (and enter) each of the 2^L states of an (n,k,L) code.
The Viterbi algorithm improves the computational efficiency by concentrating on the survivor paths of the trellis.

The survivor path

Assume for simplicity a convolutional code with k = 1; thus up to 2^k = 2 branches can enter each state in the trellis diagram.
Assume the optimal path passes through node S. The metric comparison is done by adding the accumulated metrics of the two paths (through S1 and through S2) to their branch metrics into S. On the survivor path the accumulated metric is necessarily smaller (otherwise it could not be part of the optimum path).
For this reason the non-surviving path can be discarded: not all path alternatives need to be considered further.
Note that in principle the whole transmitted sequence must be received before the decision. In practice, however, storing the survivors over an input length of about 5L is quite adequate.

[Figure: 2^L nodes, determined by the memory depth; 2^k branches enter each node, and the branch with the larger accumulated metric is discarded. The add-compare-select step is sketched below.]
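The add-compare-select decision at a single node, reduced to a one-liner (a sketch):

```python
# Of the 2^k = 2 branches entering a node, only the path with the smaller
# accumulated metric can still be part of the optimum path; keep it, drop the other.
def acs(cand_a, cand_b):
    return min(cand_a, cand_b)                  # candidate = (accumulated metric, origin)

print(acs((3, "via S1"), (2, "via S2")))        # -> (2, 'via S2') survives
```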

Example of using the Viterbi algorithm

Assume the received sequence is

y = 01 10 11 11 01 00 01

for the (n,k,L) = (2,1,2) encoder shown below. Determine the Viterbi-decoded output sequence!

[Figure: the encoder and its four trellis states. Note that for this encoder the code rate is 1/2 and the memory depth equals L = 2.]

The maximum likelihood path

[Figure: trellis with branch Hamming distances in parentheses. At each node the smaller accumulated metric is selected; after the register length L + 1 = 3 the branch pattern begins to repeat, and the first trellis depth with two entries per node is where the first such selections occur.]

The decoded ML code sequence is 11 10 10 11 00 00 00, whose Hamming distance to the received sequence is 4, and the respective decoded input sequence is 1 1 0 0 0 0 0 (why?). Note that this is the minimum distance path. (In the figure, black circles denote the deleted branches; dashed lines indicate that a '1' was applied.) A decoder sketch reproducing this result follows below.
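A compact hard-decision Viterbi sketch. The branch labeling (first coded bit m_j ⊕ m_{j-2}, second m_j ⊕ m_{j-1} ⊕ m_{j-2}) is an assumption chosen to match the figure's output order; metric ties are broken arbitrarily, and the final state is forced to 00 because the example's code word is flushed with zeros.

```python
def viterbi_212(y_pairs):
    states = [(a, b) for a in (0, 1) for b in (0, 1)]          # (m_{j-1}, m_{j-2})
    metric = {s: 0 if s == (0, 0) else float("inf") for s in states}
    paths = {s: [] for s in states}
    for y in y_pairs:
        new_metric, new_paths = {}, {}
        for u, s1 in states:                                   # next state (u, s1)
            cands = []
            for s2 in (0, 1):                                  # previous state (s1, s2)
                out = ((u + s2) % 2, (u + s1 + s2) % 2)        # branch label (assumed)
                d = (out[0] != y[0]) + (out[1] != y[1])        # branch Hamming distance
                cands.append((metric[(s1, s2)] + d, (s1, s2)))
            m, prev = min(cands)                               # add-compare-select
            new_metric[(u, s1)], new_paths[(u, s1)] = m, paths[prev] + [u]
        metric, paths = new_metric, new_paths
    return paths[(0, 0)], metric[(0, 0)]                       # flushed: end in state 00

y = [(0,1), (1,0), (1,1), (1,1), (0,1), (0,0), (0,1)]
print(viterbi_212(y))   # -> ([1, 1, 0, 0, 0, 0, 0], 4): input sequence, distance 4
```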

How should decoding be terminated?

In the previous example it was assumed that the register was finally filled with zeros, which guarantees that the minimum distance path returns to the zero state.
In practice, with long code words, zeroing requires feeding a long sequence of zeros after the message bits: this wastes channel capacity and introduces delay.
To avoid this, path memory truncation is applied:
- Trace all the surviving paths back to the depth where they merge.
- The figure shows such a common point at a memory depth J.
- J is a random variable; the applicable magnitude shown in the figure (J ≈ 5L stages of the trellis) has been experimentally found to cause only a negligible error rate increase.
- Note that this also introduces a decoding delay of 5L!

Lessons learned

You understand the differences between cyclic codes and convolutional codes.
You can create the state diagram for a convolutional encoder.
You know how to construct convolutional encoder circuits based on knowing the generator sequences.
You can analyze code strength based on known code generation circuits, state diagrams, or generator sequences.
You understand how to realize maximum likelihood convolutional decoding by exhaustive search.
You understand the principle of Viterbi decoding.
