5CS3-ITC-Unit-V

Unit – 5

Convolutional Codes
Department of Computer Science and Engineering
Convolutional codes differ from block codes in that the encoder
contains memory and the n encoder outputs at any time unit
depend not only on the k inputs but also on m previous input
blocks.
An (n, k, m) convolutional code can be implemented with a k-
input, n-output linear sequential circuit with input memory m.
Typically, n and k are small integers with k<n, but the memory order m
must be made large to achieve low error probabilities.
In the important special case when k=1, the information sequence
is not divided into blocks and can be processed continuously.
Convolutional codes were first introduced by Elias in 1955 as an
alternative to block codes.

Shortly thereafter, Wozencraft proposed sequential decoding as an
efficient decoding scheme for convolutional codes, and
experimental studies soon began to appear.
In 1963, Massey proposed a less efficient but simpler-to-implement
decoding method called threshold decoding.
Then in 1967, Viterbi proposed a maximum likelihood decoding
scheme that was relatively easy to implement for codes with small
memory orders.
This scheme, called Viterbi decoding, together with improved
versions of sequential decoding, led to the application of
convolutional codes to deep-space and satellite communication in
the early 1970s.

A convolutional code is generated by passing the information
sequence to be transmitted through a linear finite-state shift register.
In general, the shift register consists of K (k-bit) stages and n linear
algebraic function generators.

Convolutional codes
k = number of bits shifted into the encoder at one time
k=1 is usually used!!
n = number of encoder output bits corresponding to the k
information bits
Rc = k/n = code rate
K = constraint length, encoder memory.
Each encoded bit is a function of the present input bits and the
past input bits stored in the encoder memory.
Note that the definition of constraint length here is the same as
that of Shu Lin’s, while the shift register’s representation is
different.

Example 1:
Consider the binary convolutional encoder with constraint
length K=3, k=1, and n=3.
The generators are: g1=[100], g2=[101], and g3=[111].
The generators are more conveniently given in octal form
as (4,5,7).
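The binary-to-octal conversion of the generators can be checked with a short Python sketch (the helper name is my own, for illustration):

```python
def gen_to_octal(bits):
    """Convert a generator given as a bit list, e.g. [1, 0, 1], to its octal form."""
    value = int("".join(str(b) for b in bits), 2)
    return int(oct(value)[2:])  # drop Python's '0o' prefix

# Generators of the K=3, k=1, n=3 encoder from Example 1
g1, g2, g3 = [1, 0, 0], [1, 0, 1], [1, 1, 1]
print(gen_to_octal(g1), gen_to_octal(g2), gen_to_octal(g3))  # 4 5 7
```

The same helper reproduces the octal forms (13, 15, 12) of the rate-2/3 generators in Example 2.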

Example 2:
Consider a rate 2/3 convolutional encoder.
The generators are: g1=[1011], g2=[1101], and g3=[1010].
In octal form, these generators are (13, 15, 12).

 There are three alternative methods that are often used to
describe a convolutional code:
 Tree Diagram
 Trellis Diagram
 State Diagram
Tree diagram
Note that the tree diagram on the
right repeats itself after the third
stage.
This is consistent with the fact that
the constraint length K=3.
The output sequence at each stage is
determined by the input bit and the
two previous input bits.
In other words, we may say that the
3-bit output sequence for each input
bit is determined by the input bit and
the four possible states of the shift
register, denoted as a=00, b=01,
c=10, and d=11.
[Figure: tree diagram for the rate 1/3, K=3 convolutional code.]
Trellis diagram

[Figure: trellis diagram for the rate 1/3, K=3 convolutional code.]


Example: K=2, k=2, n=3 convolutional code
Tree diagram

Example: K=2, k=2, n=3 convolutional code
Trellis diagram

Example: K=2, k=2, n=3 convolutional code
State diagram

In general, we state that a rate k/n, constraint length K
convolutional code is characterized by 2^k branches emanating
from each node of the tree diagram.
The trellis and the state diagrams each have 2^(k(K-1)) possible states.
There are 2^k branches entering each state and 2^k branches leaving
each state.

Example: A (2, 1, 3) binary convolutional code:
The encoder consists of an m = 3-stage shift register together with
n = 2 modulo-2 adders and a multiplexer for serializing the
encoder outputs.
The mod-2 adders can be implemented as EXCLUSIVE-OR gates.
Since mod-2 addition is a linear operation, the encoder is a
linear feedforward shift register.
All convolutional encoders can be implemented using a linear
feedforward shift register of this type.
The information sequence u = (u_0, u_1, u_2, …) enters the encoder
one bit at a time.
Since the encoder is a linear system, the two encoder output
sequences v^(1) = (v_0^(1), v_1^(1), v_2^(1), …) and v^(2) = (v_0^(2), v_1^(2), v_2^(2), …) can
be obtained as the convolution of the input sequence u with the
two encoder "impulse responses."
The impulse responses are obtained by letting u = (1 0 0 …) and
observing the two output sequences.
Since the encoder has an m-time-unit memory, the impulse
responses can last at most m+1 time units, and are written as:
g^(1) = (g_0^(1), g_1^(1), …, g_m^(1))
g^(2) = (g_0^(2), g_1^(2), …, g_m^(2))
For the encoder of the binary (2, 1, 3) code,
g^(1) = (1 0 1 1)
g^(2) = (1 1 1 1)
The impulse responses g^(1) and g^(2) are called the generator
sequences of the code.
The encoding equations can now be written as
v^(1) = u * g^(1)
v^(2) = u * g^(2)
where * denotes discrete convolution and all operations are mod-2.
The convolution operation implies that for all l ≥ 0,
v_l^(j) = Σ_{i=0}^{m} u_{l-i} g_i^(j) = u_l g_0^(j) + u_{l-1} g_1^(j) + … + u_{l-m} g_m^(j),  j = 1, 2,
where u_{l-i} = 0 for all l < i.
Hence, for the encoder of the binary (2, 1, 3) code,
v_l^(1) = u_l + u_{l-2} + u_{l-3}
v_l^(2) = u_l + u_{l-1} + u_{l-2} + u_{l-3}
as can easily be verified by direct inspection of the encoding
circuit.
After encoding, the two output sequences are multiplexed into a
single sequence, called the code word, for transmission over the
channel.
The code word is given by
v = (v_0^(1) v_0^(2), v_1^(1) v_1^(2), v_2^(1) v_2^(2), …).
Example 10.1
Let the information sequence u = (1 0 1 1 1). Then the
output sequences are
v^(1) = (1 0 1 1 1) * (1 0 1 1) = (1 0 0 0 0 0 0 1)
v^(2) = (1 0 1 1 1) * (1 1 1 1) = (1 1 0 1 1 1 0 1)
and the code word is
v = (1 1, 0 1, 0 0, 0 1, 0 1, 0 1, 0 0, 1 1).

A convolutional encoder generates n encoded bits for each k
information bits, and R = k/n is called the code rate.
For a k·L finite-length information sequence, the corresponding
code word has length n(L + m), where the final n·m outputs are
generated after the last nonzero information block has entered the
encoder.
Viewing a convolutional code as a linear block code with
generator matrix G, the block code rate is given by kL/n(L + m),
the ratio of the number of information bits to the length of the code
word.
If L » m, then L/(L + m) ≈ 1, and the block code rate and
convolutional code rate are approximately equal.

If L were small, however, the ratio kL/n(L + m), which is the
effective rate of information transmission, would be reduced
below the code rate by a fractional amount
(k/n − kL/n(L + m)) / (k/n) = m / (L + m)
called the fractional rate loss.
To keep the fractional rate loss small, L is always assumed to be
much larger than m.
Example 10.5
For a (2, 1, 3) convolutional code with L = 5, the fractional rate
loss is 3/8 = 37.5%. However, if the length of the information
sequence is L = 1000, the fractional rate loss is only 3/1003 ≈ 0.3%.
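The fractional rate loss m/(L + m) from Example 10.5 is easy to check with a one-line sketch (helper name is my own):

```python
def fractional_rate_loss(L, m):
    """Fractional rate loss of an (n, k, m) convolutional code for an L-block message."""
    return m / (L + m)

print(fractional_rate_loss(5, 3))                # 0.375, i.e. 3/8
print(round(fractional_rate_loss(1000, 3), 4))   # 0.003, i.e. about 3/1003
```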

• Input stream broken into segments of k_o symbols each.
• Each segment – an information frame.
• Encoder – (memory + logic circuit).
• Memory stores the m most recent information frames,
a total of m·k_o information symbols.
• At each input, the logic circuit computes a codeword frame.
• For each information frame (k_o symbols), we get a
codeword frame (n_o symbols).
• The same information frame may not generate the same
codeword frame. Why?
Constraint Length v = m·k_o
[Figure: shift-register encoder – a stream of k_o-symbol information frames enters the register; the logic circuit maps the stored frames to an n_o-symbol codeword frame.]

• Constraint length of a shift register encoder is the
number of symbols it can store in its memory: v = m·k_o.
 Wordlength of shift register encoder: k = (m+1)·k_o
 Blocklength of shift register encoder: n = (m+1)·n_o
 (n_o, k_o) code tree rate = k_o/n_o = k/n.
 A (n_o, k_o) tree code that is linear, time invariant, and
has finite wordlength k = (m+1)·k_o is called an (n, k)
convolutional code.
 A (n_o, k_o) tree code that is time invariant and has
finite wordlength k = (m+1)·k_o is called an (n, k)
sliding block code. Not necessarily linear.
[Figure: rate-1/2 convolutional encoder – input shift register feeding two mod-2 adders, producing the encoded output.]
• One bit of input converts to two bits of code.
• k_o = 1, n_o = 2, blocklength = (m+1)·n_o = 6
• Constraint length = 2, code rate = 1/2
Incoming Bit | Current State of Encoder | Outgoing Bits
     0       |           00             |      00
     1       |           00             |      11
     0       |           01             |      11
     1       |           01             |      00
     0       |           10             |      01
     1       |           10             |      10
     0       |           11             |      10
     1       |           11             |      01
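The table can be reproduced with a small state-machine sketch. The tap assignment below (first output = input XOR oldest stored bit, second output = input XOR both stored bits) is one choice consistent with the table rows, since the figure's exact wiring is not recoverable here; the function name is illustrative:

```python
def step(bit, state):
    """One encoder step. state = (b1, b2): the previous input bit and the one before it."""
    b1, b2 = state
    out = (bit ^ b2, bit ^ b1 ^ b2)  # taps chosen to reproduce the table above
    return out, (bit, b1)            # shift the new bit into the register

# Reproduce every row of the table
for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for bit in (0, 1):
        out, _ = step(bit, state)
        print(bit, state, out)
```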
 Same bit gets coded differently depending upon
previous bits.
 Make the state diagram of the encoder.
 2-bit encoder memory – 4 states.
[Figure: state diagram – four states 00, 01, 10, 11, with branches labeled by input bit and output pair.]
 4 nodes are the 4 states. Code rate is 1/2.
 In the trellis diagram, follow the upper or lower branch to the
next node if the input is 0 or 1.
 Traverse to the next state of the node with this input,
writing the code on the branch.
 Continues…
 Complete the diagram.
[Figure: trellis diagram for the rate-1/2 encoder, built up stage by stage – states 00, 01, 10, 11, with branches labeled by the output pairs 00, 11, 10, 01.]
Any information sequence i_0, i_1, i_2, i_3, … can be
expressed in terms of the delay element D as
I(D) = i_0 + i_1 D + i_2 D^2 + i_3 D^3 + i_4 D^4 + …
 10100011 will be 1 + D^2 + D^6 + D^7
• Similarly, the encoder can also be expressed as
polynomials of D.
• Using the previous problem, k_o = 1, n_o = 2:
– first bit of code: g_11(D) = D^2 + D + 1
– second bit of code: g_12(D) = D^2 + 1
– G(D) = [g_ij(D)] = [D^2+D+1  D^2+1]
 c_j(D) = Σ_l i_l(D) g_lj(D)
C(D) = I(D) G(D)
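The product C(D) = I(D)G(D) is ordinary polynomial multiplication with coefficients reduced mod 2. A minimal sketch using coefficient lists, constant term first (helper name is my own):

```python
def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (constant term first)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] ^= ai & bj
    return c

# I(D) = 1 + D^2 + D^3 times g11(D) = 1 + D + D^2
print(poly_mul_gf2([1, 0, 1, 1], [1, 1, 1]))  # [1, 1, 0, 0, 0, 1], i.e. 1 + D + D^5
```

Note that this is exactly the mod-2 convolution used earlier: multiplying I(D) by a generator polynomial gives the same coefficients as convolving u with the generator sequence.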
[Figure: systematic rate-1/2 encoder – the input bit passes through unchanged as the first output; the second output is formed by a mod-2 adder on the input and its four-step delay.]
 k_0 = 1, n_0 = 2, rate = 1/2.
 G(D) = [1  D^4+1]
Called a systematic convolutional encoder, as k_0 bits of the
code are the same as the data.
[Figure: rate-2/3 encoder – inputs k_1, k_2 feed a shift register with mod-2 adders producing outputs n_1, n_2, n_3.]
• k_0 = 2, n_0 = 3, rate = 2/3.
• G(D) = [ g_11(D) g_12(D) g_13(D) ]
         [ g_21(D) g_22(D) g_23(D) ]
       = [ 1 0 D^3+D+1 ]
         [ 0 1 0 ]
 Wordlength k = k_0 · max_{i,j}{deg g_ij(D) + 1}.
 Blocklength n = n_0 · max_{i,j}{deg g_ij(D) + 1}.
 Constraint length
v = Σ_i max_j{deg g_ij(D)}.
 The parity check matrix H(D) is an (n_0 − k_0) by n_0 matrix
of polynomials which satisfies
G(D) H(D)^T = 0
 The syndrome polynomial s(D) is an (n_0 − k_0)-component
row vector
s(D) = v(D) H(D)^T
 A systematic encoder has G(D) = [I | P(D)]
 where I is the k_0 by k_0 identity matrix and P(D) is the k_0 by
(n_0 − k_0) parity check polynomial matrix given by
H(D) = [−P(D)^T | I]
 where I is the (n_0 − k_0) by (n_0 − k_0) identity matrix.

 Also G(D) H(D)^T = 0
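For the earlier systematic rate-1/2 example with G(D) = [1  D^4+1], we have P(D) = D^4+1, so H(D) = [P(D)  1] (the minus sign is irrelevant over GF(2)). The identity G(D)H(D)^T = 0 can be checked with GF(2) polynomial arithmetic; this is a sketch and the helper names are mine:

```python
def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials (coefficient lists, constant term first)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] ^= ai & bj
    return c

def poly_add_gf2(a, b):
    """Add two GF(2) polynomials (XOR of coefficients)."""
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x ^ y for x, y in zip(a, b)]

one = [1]
p = [1, 0, 0, 0, 1]   # P(D) = 1 + D^4
g = [one, p]          # G(D) = [1, D^4+1]
h = [p, one]          # H(D) = [P(D), 1]
# G(D) H(D)^T = 1 * P(D) + (D^4+1) * 1, which should vanish mod 2
s = poly_add_gf2(poly_mul_gf2(g[0], h[0]), poly_mul_gf2(g[1], h[1]))
print(all(c == 0 for c in s))  # True: the parity check is satisfied
```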


 The Viterbi algorithm is used to decode convolutional
codes. The decoding can be done in two approaches:
hard decision decoding, which uses Hamming distance
as the metric, and soft decision decoding, which uses
Euclidean distance as the metric.
 As noted earlier, soft decision decoding improves the
performance of the code to a significant extent compared
to hard decision decoding.
 Viterbi Algorithm (VA) decoding involves three
stages, namely:
1) Branch metric calculation
2) Path metric calculation
3) Traceback operation
 The pair of received bits (for n=2; if n=3 we call
them triplets, etc.) is compared with the
corresponding branches in the trellis and the
distance metrics are calculated. For hard decision
decoding, Hamming distances are calculated.
For example, if the received pair of bits is '11', the
Hamming distances to the {'00', '01', '10', '11'} outputs
of the trellis are 2, 1, 1, 0 respectively.
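The branch metrics for hard decision decoding are just Hamming distances; the '11' example above can be written out directly (helper name is illustrative):

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

received = "11"
print([hamming(received, out) for out in ("00", "01", "10", "11")])  # [2, 1, 1, 0]
```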
 Path metrics are calculated using a procedure called
ACS (Add-Compare-Select). This procedure is
repeated for every encoder state.
 Add – for a given state, we know two states on the
previous step which can move to this state, and the
output bit pairs that correspond to these transitions. To
calculate new path metrics, we add the previous path
metrics with the corresponding branch metrics.
 Compare, select – we now have two paths, ending in a
given state. One of them (with greater metric) is
dropped.
 The traceback operation is needed in hardware that
generally has memory limitations, and when the transmitted
message is long compared to the memory
available. It is also required to maintain a constant
throughput at the output of the decoder.
 Using soft decision decoding is recommended for
Viterbi decoders, since it can give a gain of about 2 dB
(that is, a system with a soft decision decoder can use 2
dB less transmitting power than a system with a hard
decision decoder with the same error probability).
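The three stages (branch metrics, ACS, traceback) can be put together in a compact hard-decision Viterbi sketch for the (2, 1, 3) code of Example 10.1. This is an illustrative implementation under two assumptions: the encoder starts in the all-zero state, and the full path history is kept rather than a limited traceback memory:

```python
G = ((1, 0, 1, 1), (1, 1, 1, 1))  # generator sequences of the (2, 1, 3) code

def encode_step(bit, state):
    """state = (u_{l-1}, u_{l-2}, u_{l-3}); returns the output pair and next state."""
    reg = (bit,) + state
    out = tuple(sum(g[i] & reg[i] for i in range(4)) % 2 for g in G)
    return out, (bit, state[0], state[1])

def viterbi(received):
    """Hard-decision Viterbi decoding; received is a list of bit pairs."""
    INF = float("inf")
    states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    metric = {s: (0 if s == (0, 0, 0) else INF) for s in states}
    path = {(0, 0, 0): []}
    for r in received:
        new_metric = {s: INF for s in states}
        new_path = {}
        for s, m in metric.items():
            if m == INF:
                continue
            for bit in (0, 1):
                out, ns = encode_step(bit, s)
                bm = sum(a != b for a, b in zip(out, r))  # branch metric
                if m + bm < new_metric[ns]:               # add-compare-select
                    new_metric[ns] = m + bm
                    new_path[ns] = path[s] + [bit]
        metric, path = new_metric, new_path
    best = min(path, key=lambda s: metric[s])             # survivor with least metric
    return path[best]

# Decode the (error-free) code word of Example 10.1
cw = [(1,1), (0,1), (0,0), (0,1), (0,1), (0,1), (0,0), (1,1)]
print(viterbi(cw))  # [1, 0, 1, 1, 1, 0, 0, 0]: u = 10111 plus the m = 3 tail zeros
```

Because the received word here is error-free, the surviving path has metric 0 and the decoder returns the transmitted sequence exactly; with a few flipped bits, the minimum-metric survivor still recovers it as long as the errors are within the code's correcting capability.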
