Error Probability Performance For BPSK Using Viterbi Algorithm
Update: For some reason, the blog is unable to display the article which discusses
both Convolutional coding and Viterbi decoding in a single post. As a workaround, the
article was broken up into two posts.
This post describes the Viterbi decoding algorithm for a simple binary convolutional
code with rate 1/2, constraint length 3 and generator polynomial [7,5] in octal.
For more details on the Binary convolutional code, please refer to the post
Convolutional code
Viterbi algorithm
As explained in Chapter 5.1.4 of Digital Communications by John Proakis, for optimal
decoding of a modulation scheme with memory (as is the case here), if there are N coded bits,
we need to search among 2^N possible combinations. This becomes prohibitively complex as
N becomes large. However, Andrew J. Viterbi, in his landmark paper Error bounds for
convolutional codes and an asymptotically optimum decoding algorithm, IEEE Transactions on
Information Theory 13(2):260-269, April 1967, described a scheme for reducing the complexity
to more manageable levels. Some of the key assumptions are as follows:
(a) As shown in Table 1 and Figure 2 (in the article Convolutional Code), any state can be
reached from only 2 possible previous states.
(b) Though each state can be reached from 2 possible states, only one of the transitions is
valid. We can find the transition which is more likely (based on the received coded bits) and
ignore the other transition.
(c) The errors in the received coded sequence are randomly distributed and the probability of
error is small.
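To make the exponential cost of exhaustive decoding concrete, here is a small sketch of a brute-force maximum-likelihood decoder that tries all 2^N input sequences and keeps the one whose codeword is closest (in Hamming distance) to the received bits. This is an illustrative Python sketch, not part of the article's Octave script; the encoder follows the rate-1/2, [7,5] octal code described above.

```python
from itertools import product

def conv_encode(bits):
    # Rate-1/2, K=3, generators [7,5] octal: out1 = u+s1+s0, out2 = u+s0 (mod 2)
    s1 = s0 = 0
    out = []
    for u in bits:
        out += [u ^ s1 ^ s0, u ^ s0]
        s1, s0 = u, s1
    return out

def brute_force_decode(rx, n):
    # Examine all 2^n candidate input sequences -- exponential in n
    best, best_dist = None, len(rx) + 1
    for cand in product([0, 1], repeat=n):
        cw = conv_encode(list(cand))
        dist = sum(a != b for a, b in zip(cw, rx))
        if dist < best_dist:
            best, best_dist = list(cand), dist
    return best

coded = conv_encode([1, 0, 1, 1])   # 8 coded bits for 4 data bits
coded[2] ^= 1                       # introduce a single channel error
decoded = brute_force_decode(coded, 4)
```

Even for this toy case there are 2^4 = 16 candidates; doubling the message length squares the search space, which is exactly the growth the Viterbi algorithm avoids.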
Based on the above assumptions, the decoding scheme proceeds as follows: Assume that there are
2N coded bits. Take two coded bits at a time for processing and compute the Hamming distance,
Branch Metric, Path Metric and Survivor Path index N times. Let i be the index and r the pair
of received coded bits at index i. The four Hamming distances are:
h(00) is the number of 1s in r XOR 00
h(01) is the number of 1s in r XOR 01
h(10) is the number of 1s in r XOR 10
h(11) is the number of 1s in r XOR 11
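The four Hamming distance computations above can be sketched as follows (a Python sketch for illustration; the article's own script uses Octave):

```python
# Candidate coded-bit pairs, in the order 00, 01, 10, 11
REF = [(0, 0), (0, 1), (1, 0), (1, 1)]

def hamming_distances(r):
    # number of 1s in r XOR c, for each candidate pair c
    return [sum(a ^ b for a, b in zip(r, c)) for c in REF]

h = hamming_distances((1, 0))   # received pair 10
```

For the received pair 10 this yields distances 1, 2, 0, 1 from 00, 01, 10 and 11 respectively.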
Note:
1. Typically, Convolutional coder always starts from State 00. The Viterbi decoder also assumes
the same.
2. For index i = 1, only the branch metrics for the State 00 (from State 00) branch and the
State 10 (from State 00) branch can be computed. In this case, the path metric for each state
is equal to the branch metric, as the other branch is not valid.
3. For index i = 2, only the branch metrics for State 00 (from State 00), State 01 (from
State 10), State 10 (from State 00) and State 11 (from State 10) can be computed. In this case
too, the path metric for each state is equal to the branch metric, as the other branch is not valid.
4. Starting from index i = 3, each state has two incoming branches and the need to do Add,
Compare and Select arises.
5. It is possible that two branch metrics have the same value. In that scenario, we can
choose either branch and proceed.
Figure: Branch Metric and Path Metric computation for Viterbi decoder
Consider the computation at index i, with pm(s, i-1) denoting the path metric of state s at the previous index.
State 00 can be reached from two branches:
(a) From State 00 with output 00. The branch metric for this transition is pm(00, i-1) + h(00).
(b) From State 01 with output 11. The branch metric for this transition is pm(01, i-1) + h(11).
The path metric for State 00 is chosen as whichever is minimum out of the two.
State 01 can be reached from two branches:
(c) From State 10 with output 10. The branch metric for this transition is pm(10, i-1) + h(10).
(d) From State 11 with output 01. The branch metric for this transition is pm(11, i-1) + h(01).
The path metric for State 01 is chosen as whichever is minimum out of the two.
State 10 can be reached from two branches:
(e) From State 00 with output 11. The branch metric for this transition is pm(00, i-1) + h(11).
(f) From State 01 with output 00. The branch metric for this transition is pm(01, i-1) + h(00).
The path metric for State 10 is chosen as whichever is minimum out of the two.
State 11 can be reached from two branches:
(g) From State 10 with output 01. The branch metric for this transition is pm(10, i-1) + h(01).
(h) From State 11 with output 10. The branch metric for this transition is pm(11, i-1) + h(10).
The path metric for State 11 is chosen as whichever is minimum out of the two.
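The Add, Compare and Select step can be sketched in Python (illustrative only; the trellis tables below encode the transitions listed above, with state indices 0..3 standing for States 00, 01, 10 and 11):

```python
# States feeding each state, and the index (into 00,01,10,11) of each branch output
PREV = [(0, 1), (2, 3), (0, 1), (2, 3)]
OUT  = [(0, 3), (2, 1), (3, 0), (1, 2)]

def acs(path_metric, h):
    """Add-Compare-Select. h[k] is the Hamming distance of the received
    pair from the k-th candidate output 00, 01, 10, 11."""
    new_pm, survivor = [0] * 4, [0] * 4
    for s in range(4):
        cands = [path_metric[p] + h[o] for p, o in zip(PREV[s], OUT[s])]
        k = 0 if cands[0] <= cands[1] else 1   # on a tie, keep the first branch
        new_pm[s], survivor[s] = cands[k], PREV[s][k]
    return new_pm, survivor

BIG = 99                                       # stands in for an unreachable state
pm, surv = acs([0, BIG, BIG, BIG], [2, 1, 1, 0])   # received pair 11, start in State 00
```

With the decoder started in State 00 and the received pair 11, the surviving branch into State 10 is the State 00 branch with output 11, giving it path metric 0, as expected.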
Traceback Unit
Once the survivor path is computed N times, the decoding algorithm can start estimating the
input sequence. Thanks to the presence of the tail bits (two additional zeros), it is known
that the final state following convolutional coding is State 00.
So, start from the last computed survivor path at index N for State 00. From the survivor
path, find the previous state corresponding to the current state. From the knowledge of the
current state and the previous state, the input bit can be determined (Ref: Table 2, input
given current state and previous state). Continue tracing back through the survivor path and
estimate the input sequence till index i = 1.
Table 2: Input bit, given current state and previous state

                    previous state
current state    00   01   10   11
     00           0    0    x    x
     01           x    x    0    0
     10           1    1    x    x
     11           x    x    1    1

(x denotes an invalid transition)
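The traceback procedure can be sketched as follows (an illustrative Python sketch; the lookup table is Table 2 above, with state indices 0..3 standing for States 00, 01, 10 and 11):

```python
# Input bit given (current state, previous state); None marks invalid transitions
IP_LUT = [
    [0, 0, None, None],
    [None, None, 0, 0],
    [1, 1, None, None],
    [None, None, 1, 1],
]

def traceback(survivor):
    """survivor[s][i] = previous state on the surviving branch into state s
    at step i. Walk back from State 00 at the last index, collecting bits."""
    state, bits = 0, []
    for i in range(len(survivor[0]) - 1, -1, -1):
        prev = survivor[state][i]
        bits.append(IP_LUT[state][prev])
        state = prev
    return bits[::-1]

# Hand-built survivor entries for input 1 0 plus two tail zeros
# (state path 00 -> 10 -> 01 -> 00 -> 00); unused entries are left as 0
surv = [[0, 0, 1, 0],
        [0, 2, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
decoded = traceback(surv)
```

Walking back from State 00 recovers the transmitted bits 1 0 followed by the two tail zeros.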
Note: Since 2 coded bits are required for transmission of each data bit, the relation between
the coded bit energy to noise ratio (Ec/N0) and the data bit energy to noise ratio (Eb/N0) is
Ec/N0 = Eb/N0 - 10*log10(2) dB.
The simulation model performs the following:
(a) Generating a random binary sequence of 0's and 1's
(b) Convolutionally encoding them using the rate-1/2, generator polynomial [7,5] octal code
(c) BPSK modulation: bit 0 represented as -1 and bit 1 represented as +1
(d) Passing the modulated symbols through an Additive White Gaussian Noise channel
(e) Hard decision demodulation of the received symbols, followed by Viterbi decoding
(f) Counting the number of errors from the output of the Viterbi decoder
(g) Repeating for multiple Eb/N0 values
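The Ec/N0 scaling used in the script can be checked with a one-line computation (Python for illustration; the example value 5 dB is arbitrary):

```python
import math

Eb_N0_dB = 5.0                               # example data-bit Eb/N0, in dB
Ec_N0_dB = Eb_N0_dB - 10 * math.log10(2)     # each data bit spreads over 2 coded bits
```

Since 10*log10(2) is about 3.01 dB, each coded bit carries roughly 3 dB less energy than the data bit it helps represent.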
Click here to download the Matlab/Octave script for computing BER with Binary Convolutional
code and Viterbi decoding for BPSK modulation in an AWGN channel
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% All rights reserved by Krishna Pillai, http://www.dsplog.com
% The file may not be re-distributed without explicit authorization
% from Krishna Pillai.
% Checked for proper operation with Octave Version 3.0.0
% Author : Krishna Pillai
% Email : [email protected]
% Version : 1.0
% Date : 14th December 2008
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
N = 10^4; % number of data bits
Eb_N0_dB = 0:1:10; % multiple Eb/N0 values
Ec_N0_dB = Eb_N0_dB - 10*log10(2); % coded bit energy for the rate-1/2 code
ref = [0 0 ; 0 1; 1 0 ; 1 1 ]; % candidate coded output pairs
prevState = [1 2; 3 4; 1 2; 3 4]; % states feeding States 00, 01, 10, 11
refIdx = [1 4; 3 2; 4 1; 2 3]; % ref row output on each feeding branch
ipLUT = [ 0 0 0 0;...
          0 0 0 0;...
          1 1 0 0;...
          0 0 1 1 ]; % input bit given current state (row) and previous state (column)
nErr = zeros(1,length(Eb_N0_dB));
for yy = 1:length(Eb_N0_dB)
   % Transmitter
   ip = rand(1,N)>0.5; % generating 0,1 with equal probability
   cip1 = mod(conv(double(ip),[1 1 1]),2); % output of generator polynomial 7 (octal)
   cip2 = mod(conv(double(ip),[1 0 1]),2); % output of generator polynomial 5 (octal)
   cip = [cip1; cip2]; cip = cip(:).'; % interleaving the two coded streams
   s = 2*cip - 1; % BPSK modulation: 0 -> -1, 1 -> +1
   % Noise addition
   n = 1/sqrt(2)*(randn(1,length(s)) + 1i*randn(1,length(s)));
   y = s + 10^(-Ec_N0_dB(yy)/20)*n; % additive white gaussian noise
   cipHat = real(y) > 0; % hard decision demodulation
   % Viterbi decoding
   pathMetric = [0; Inf; Inf; Inf]; % start from State 00; unreachable states carry Inf
   survivorPath_v = zeros(4,length(y)/2); % survivor path
   for ii = 1:length(y)/2
      r = cipHat(2*ii-1:2*ii); % taking 2 coded bits
      hammingDist = sum(xor(repmat(r,4,1),ref),2); % distance of r from 00,01,10,11
      pathMetric_n = zeros(4,1);
      for ss = 1:4 % add, compare and select for each state
         bm1 = pathMetric(prevState(ss,1)) + hammingDist(refIdx(ss,1));
         bm2 = pathMetric(prevState(ss,2)) + hammingDist(refIdx(ss,2));
         [pathMetric_n(ss,1), idx] = min([bm1,bm2]);
         survivorPath_v(ss,ii) = prevState(ss,idx);
      end
      pathMetric = pathMetric_n;
   end
   % Traceback unit: start from the known final State 00
   currState = 1;
   ipHat_v = zeros(1,length(y)/2);
   for jj = length(y)/2:-1:1
      prevStateIdx = survivorPath_v(currState,jj);
      ipHat_v(jj) = ipLUT(currState,prevStateIdx);
      currState = prevStateIdx;
   end
   nErr(yy) = sum(ip ~= ipHat_v(1:N)); % first N decoded bits are data, last 2 are tail
end
simBer_Viterbi = nErr/N; % simulated BER with Viterbi decoding
theoryBer = 0.5*erfc(sqrt(10.^(Eb_N0_dB/10))); % theoretical uncoded BPSK BER
close all
figure
semilogy(Eb_N0_dB,theoryBer,'bd-','LineWidth',2);
hold on
semilogy(Eb_N0_dB,simBer_Viterbi,'mp-','LineWidth',2);
axis([0 10 10^-5 0.5])
grid on
legend('theory - uncoded', 'simulation - Viterbi (rate-1/2, [7,5]_8)');
xlabel('Eb/No, dB');
ylabel('Bit Error Rate');
title('BER for BCC with Viterbi decoding for BPSK in AWGN');
Figure 3: BER plot for BPSK modulation in AWGN channel with Binary Convolutional
code and hard decision Viterbi decoding
Observations
1. For lower Eb/N0 regions, the bit error rate with Viterbi decoding is higher than the
uncoded bit error rate. This is because the Viterbi decoder needs the errors to be randomly
distributed for it to work effectively. At lower Eb/N0 values, there is a greater chance of
multiple received coded bits being in error, and the Viterbi algorithm is then unable to recover.
2. There can be further optimizations like using soft decision decoding, limiting the traceback
length etc. Those can be discussed in future posts.