5CS3-ITC-Unit-V @zammers
Convolution Code
Department of Computer Science and Engineering
Convolutional codes differ from block codes in that the encoder
contains memory and the n encoder outputs at any time unit
depend not only on the k inputs but also on m previous input
blocks.
An (n, k, m) convolutional code can be implemented with a k-
input, n-output linear sequential circuit with input memory m.
Typically, n and k are small integers with k<n, but the memory order m
must be made large to achieve low error probabilities.
In the important special case when k=1, the information sequence
is not divided into blocks and can be processed continuously.
Convolutional codes were first introduced by Elias in 1955 as an
alternative to block codes.
Shortly thereafter, Wozencraft proposed sequential decoding as an
efficient decoding scheme for convolutional codes, and
experimental studies soon began to appear.
In 1963, Massey proposed a less efficient but simpler-to-
implement decoding method called threshold decoding.
Then in 1967, Viterbi proposed a maximum likelihood decoding
scheme that was relatively easy to implement for codes with small
memory orders.
This scheme, called Viterbi decoding, together with improved
versions of sequential decoding, led to the application of
convolutional codes to deep-space and satellite communication in
the early 1970s.
A convolutional code is generated by passing the information
sequence to be transmitted through a linear finite-state shift register.
In general, the shift register consists of K (k-bit) stages and n linear
algebraic function generators.
Convolutional codes
k = number of bits shifted into the encoder at one time
k=1 is usually used!!
n = number of encoder output bits corresponding to the k
information bits
Rc = k/n = code rate
K = constraint length, encoder memory.
Each encoded bit is a function of the present input bits and the
past input bits stored in the encoder memory.
Note that the definition of constraint length here is the same as
that of Shu Lin’s, while the shift register’s representation is
different.
Example 1:
Consider the binary convolutional encoder with constraint
length K=3, k=1, and n=3.
The generators are: g1=[100], g2=[101], and g3=[111].
The generators are more conveniently given in octal form
as (4,5,7).
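A minimal sketch of the Example 1 encoder, assuming the shift register starts in the all-zero state and no tail bits are appended (the function and variable names are illustrative):

```python
# Example 1 encoder: K=3, k=1, n=3, generators g1=[100], g2=[101], g3=[111].
GENS = [[1, 0, 0], [1, 0, 1], [1, 1, 1]]  # octal (4, 5, 7)

def conv_encode(bits, gens=GENS):
    K = len(gens[0])
    reg = [0] * K                     # shift register, newest bit first
    out = []
    for b in bits:
        reg = [b] + reg[:K - 1]       # shift the new input bit in
        for g in gens:                # one output bit per generator
            out.append(sum(x & t for x, t in zip(reg, g)) % 2)
    return out

# The octal form quoted in the text:
print([int(''.join(map(str, g)), 2) for g in GENS])  # [4, 5, 7]
print(conv_encode([1, 0]))
```

Each generator selects which register stages are XORed together, so g1=[100] simply passes the current input bit through.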
Example 2:
Consider a rate 2/3 convolutional encoder.
The generators are: g1=[1011], g2=[1101], and g3=[1010].
In octal form, these generators are (13, 15, 12).
There are three alternative methods that are
often used to describe a convolutional code:
Tree Diagram
Trellis Diagram
State Diagram
Tree diagram
Note that the tree diagram in the
right repeats itself after the third
stage.
This is consistent with the fact that
the constraint length K=3.
The output sequence at each stage is
determined by the input bit and the
two previous input bits.
In other words, we may say that the
3-bit output sequence for each input
bit is determined by the input bit and
the four possible states of the shift
register, denoted as a=00, b=01,
c=10, and d=11.
[Figure: Tree diagram for the rate 1/3, K=3 convolutional code.]
Trellis diagram
[Figure: Trellis diagram for the K=2, k=2, n=3 convolutional code.]
State diagram
[Figure: State diagram for the K=2, k=2, n=3 convolutional code.]
In general, we state that a rate k/n, constraint length K
convolutional code is characterized by 2^k branches emanating
from each node of the tree diagram.
The trellis and the state diagrams each have 2^(k(K-1)) possible states.
There are 2^k branches entering each state and 2^k branches leaving
each state.
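These counts can be checked in a couple of lines (a sketch; the two parameter sets are the ones used in the examples above):

```python
# Branches per node and number of trellis/state-diagram states
# for a rate k/n, constraint length K convolutional code.
def branches(k):
    return 2 ** k

def states(k, K):
    return 2 ** (k * (K - 1))

print(branches(1), states(1, 3))   # K=3, k=1: 2 branches, 4 states (a, b, c, d)
print(branches(2), states(2, 2))   # K=2, k=2: 4 branches, 4 states
```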
Example: A (2, 1, 3) binary convolutional code:
Example 10.1
Let the information sequence u = (1 0 1 1 1). Then the
output sequences are
v(1) = (1 0 1 1 1) * (1 0 1 1) = (1 0 0 0 0 0 0 1)
v(2) = (1 0 1 1 1) * (1 1 1 1) = (1 1 0 1 1 1 0 1)
and the code word is
v = (1 1, 0 1, 0 0, 0 1, 0 1, 0 1, 0 0, 1 1).
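Each output sequence above is the mod-2 discrete convolution of u with the corresponding generator; a short sketch that reproduces Example 10.1 (helper names are illustrative):

```python
# Mod-2 discrete convolution: v_l = XOR over i of u_i * g_(l-i).
def conv_mod2(u, g):
    v = [0] * (len(u) + len(g) - 1)
    for i, ui in enumerate(u):
        for j, gj in enumerate(g):
            v[i + j] ^= ui & gj
    return v

u = [1, 0, 1, 1, 1]
v1 = conv_mod2(u, [1, 0, 1, 1])   # generator g(1)
v2 = conv_mod2(u, [1, 1, 1, 1])   # generator g(2)
word = [b for pair in zip(v1, v2) for b in pair]  # interleave v1, v2
print(v1, v2)
print(word)
```

The interleaved word matches the pairs (1 1, 0 1, 0 0, 0 1, 0 1, 0 1, 0 0, 1 1) in the example.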
A convolutional encoder generates n encoded bits for each k
information bits, and R = k/n is called the code rate.
For a finite-length information sequence of k·L bits, the corresponding
code word has length n(L + m), where the final n·m outputs are
generated after the last nonzero information block has entered the
encoder.
Viewing a convolutional code as a linear block code with
generator matrix G, the block code rate is given by kL/n(L + m),
the ratio of the number of information bits to the length of the code
word.
If L » m, then L/(L + m) ≈ 1, and the block code rate and the
convolutional code rate are approximately equal.
If L were small, however, the ratio kL/n(L + m), which is the
effective rate of information transmission, would be reduced
below the code rate by the fractional amount

[k/n − kL/n(L + m)] / (k/n) = m/(L + m)

called the fractional rate loss.
To keep the fractional rate loss small, L is always assumed to be
much larger than m.
Example 10.5
For a (2, 1, 3) convolutional code with L = 5, the fractional rate loss
is 3/8 = 37.5%. However, if the length of the information sequence
is L = 1000, the fractional rate loss is only 3/1003 ≈ 0.3%.
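A quick check of the fractional rate loss m/(L + m) against Example 10.5 (the function name is illustrative):

```python
# Fractional rate loss: how far the effective rate kL/n(L+m)
# falls below the code rate k/n.
def rate_loss(L, m):
    return m / (L + m)

print(rate_loss(5, 3))      # 0.375, i.e. 37.5 %
print(rate_loss(1000, 3))   # about 0.003, i.e. 0.3 %
```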
• Input stream broken into segments of k0 symbols
each.
• Each segment is an information frame.
• Encoder = memory + logic circuit.
• The memory stores the m most recent information frames
at a time, a total of m·k0 information symbols.
• At each input, the logic circuit computes a codeword frame.
• For each information frame (k0 symbols), we get a
codeword frame (n0 symbols).
• The same information frame may not generate the same
codeword frame. Why? (Because the output also depends on
the frames stored in memory.)
Constraint length v = m·k0
[Figure: Encoder with m stages of k0 symbols each; the logic
circuit maps the stored frames to an n0-symbol codeword frame.]
[Figure: Rate-1/2 encoder with two memory stages and XOR gates;
states 00, 01, 10, 11 with branch output labels.]
The 4 nodes are the 4 states. The code rate is ½.
In the trellis diagram, follow the upper or lower branch to the
next node according to whether the input is 0 or 1.
Traverse to the next state with this input,
writing the output code on the branch.
Continue in this way and complete the diagram.
[Figure: Trellis diagram built stage by stage; branches between
the states are labeled with the output pairs 00, 11, 10, 01.]
Any information sequence i0, i1, i2, i3, … can be
expressed in terms of the delay element D as
I(D) = i0 + i1 D + i2 D^2 + i3 D^3 + i4 D^4 + …
For example, 10100011 becomes 1 + D^2 + D^6 + D^7.
• Similarly, the encoder can also be expressed as a
polynomial in D.
• Using the previous problem, k0 = 1, n0 = 2:
– first bit of code: g11(D) = D^2 + D + 1
– second bit of code: g12(D) = D^2 + 1
– G(D) = [gij(D)] = [D^2 + D + 1   D^2 + 1]
cj(D) = ∑l il(D) glj(D)
C(D) = I(D) G(D)
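Encoding in the D-domain is just polynomial multiplication over GF(2). A sketch using the generators g11(D) = D^2 + D + 1, g12(D) = D^2 + 1 and the sequence 10100011 from the text (helper names are assumptions):

```python
# Polynomials as coefficient lists over GF(2), lowest power of D first.
def poly_mul_gf2(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] ^= ai & bj
    return c

I_D = [1, 0, 1, 0, 0, 0, 1, 1]    # 10100011 -> 1 + D^2 + D^6 + D^7
g11 = [1, 1, 1]                   # D^2 + D + 1
g12 = [1, 0, 1]                   # D^2 + 1

c1 = poly_mul_gf2(I_D, g11)       # c1(D) = I(D) g11(D)
c2 = poly_mul_gf2(I_D, g12)       # c2(D) = I(D) g12(D)
print(c1)
print(c2)
```

Interleaving the coefficients of c1(D) and c2(D) frame by frame gives the transmitted codeword, exactly as C(D) = I(D)G(D) prescribes.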
[Figure: Systematic rate-1/2 encoder with four delay stages;
input i, data output a, parity output b.]
k0 = 1, n0 = 2, rate = ½.
G(D) = [1   D^4 + 1]
This is called a systematic convolutional encoder, as k0 bits of
the code are the same as the data.
[Figure: Rate-2/3 encoder with inputs k1, k2 and outputs n1, n2, n3.]
• k0 = 2, n0 = 3, rate = 2/3.
• G(D) = | g11(D)  g12(D)  g13(D) |  =  | 1  0  D^3+D+1 |
         | g21(D)  g22(D)  g23(D) |     | 0  1  0       |
Wordlength k = k0 · maxi,j{deg gij(D) + 1}.
Blocklength n = n0 · maxi,j{deg gij(D) + 1}.
Constraint length v = ∑i maxj{deg gij(D)}.
The parity check matrix H(D) is an (n0 − k0) by n0 matrix
of polynomials which satisfies
G(D) H(D)^T = 0
The syndrome polynomial s(D) is an (n0 − k0)-component
row vector
s(D) = v(D) H(D)^T
A systematic encoder has G(D) = [I | P(D)],
where I is the k0 by k0 identity matrix and P(D) is the k0 by
(n0 − k0) parity check polynomial matrix. The corresponding
parity check matrix is
H(D) = [−P(D)^T | I]
where I is the (n0 − k0) by (n0 − k0) identity matrix.
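For the systematic example G(D) = [1  D^4+1] above, P(D) = D^4 + 1, and since −P(D) = P(D) over GF(2), H(D) = [P(D)  1]. A sketch (with assumed helper names) that verifies G(D) H(D)^T = 0:

```python
# Polynomials as coefficient lists over GF(2), lowest power of D first.
def poly_mul_gf2(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] ^= ai & bj
    return c

def poly_add_gf2(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x ^ y for x, y in zip(a, b)]

one = [1]
P = [1, 0, 0, 0, 1]               # P(D) = D^4 + 1
G = [one, P]                      # G(D) = [I | P(D)], k0 = 1
H = [P, one]                      # H(D) = [-P(D)^T | I] = [P(D)  1]

# G H^T: sum over the n0 positions of g_i(D) * h_i(D)
s = poly_add_gf2(poly_mul_gf2(G[0], H[0]), poly_mul_gf2(G[1], H[1]))
print(all(c == 0 for c in s))     # the product is the zero polynomial
```

Since G(D) H(D)^T = 0, the syndrome s(D) = v(D) H(D)^T vanishes for every codeword v(D), which is what the decoder exploits.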