Paper - 2011 - Efficient Linear Programming Decoding of HDPC Codes - Alex Yufit
IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 59, NO. 3, MARCH 2011
Abstract—We propose several improvements for Linear Programming (LP) decoding algorithms for High Density Parity Check (HDPC) codes. First, we use the automorphism groups of a code to create parity check matrix diversity and to generate valid cuts from redundant parity checks. Second, we propose an efficient mixed integer decoder utilizing the branch and bound method. We further enhance the proposed decoders by removing inactive constraints and by adapting the parity check matrix prior to decoding according to the channel observations. Based on simulation results, the proposed decoders achieve near-ML performance with reasonable complexity.

Index Terms—Adaptive LP decoding, automorphism groups, BCH codes, belief propagation, branch and bound, linear programming, LP relaxation, pseudocodewords.

I. INTRODUCTION

Low Density Parity Check (LDPC) codes are characterized by a sparse parity check matrix, which in turn leads to a relatively small set of LP constraints. High Density Parity Check (HDPC) codes are characterized by a dense parity check matrix, which leads to poor performance over the AWGN channel when using the decoder of [1], due to several reasons. First, the number of constraints depends exponentially on each parity check degree, which results in a very large linear problem. Second, each vertex of the fundamental polytope is the intersection of several hyper-planes defined by the parity check constraints; thus increasing the number of constraints increases the number of pseudocodewords, deteriorating the performance of the decoder.

In the works of Vontobel and Koetter [2], [4], [5], several LP relaxations approximating the codewords polytope were described, along with low complexity LP decoding algorithms for LDPC codes.
The adaptive LP (ALP) decoder proposed by Taghavi et al. [3] iteratively converges to the same solution as [1] by adding constraints and dropping some old constraints. A randomized algorithm for finding RPC cuts to tighten the relaxation is also presented in [3] as a way to improve the performance. Nevertheless, finding efficient methods of constructing RPC cuts was left open.

In [11], Draper et al. have suggested mixed integer decoding for improving the performance of LP decoding over LDPC codes. By adding integer constraints to the ALP decoder of [3], the proposed mixed integer decoder can obtain ML decoding performance. However, the authors of [11] have claimed that their method cannot be applied to HDPC codes, since the complexity becomes prohibitively large.

An LP decoding algorithm for HDPC codes with acceptable performance and complexity was proposed by Tanatmis et al. [12]. This new decoder, denoted the New Separation Algorithm (NSA), performs significantly better than the LP decoders of [1], [3], [10] and [11], mainly due to the generation of new RPC cuts. The algorithm generating RPC cuts is based on Gaussian elimination of the parity check matrix. Recently, Tanatmis et al. have improved their decoder by finding additional algorithms for generating RPC cuts [13].

The contribution of this paper is in providing better adaptive LP decoding techniques for decoding HDPC codes. We present several methods for eliminating fractional pseudocodewords while adding tolerable complexity. First, we propose to improve the performance of the LP decoder by running several decoders in parallel, each one using a different representation of the parity check matrix. As an alternative, we propose using several representations of the parity check matrix to derive RPC cuts in order to tighten the relaxed polytope. The variety of parity check matrix representations is produced by column permutations taken from the automorphism groups of the code. Second, a new variant of mixed integer decoding utilizing the branch and bound method is proposed. We further enhance the proposed decoders by utilizing ordered statistics to adapt the parity check matrix according to the channel observations prior to decoding. We also propose removing all inactive constraints after each decoding iteration, which avoids the growth of the number of constraints and reduces the number of fractional pseudocodewords in the relaxed polytope. Our decoding algorithms adaptively seek constraints which exclude fractional pseudocodewords from the feasible set. We achieve near-ML performance by solving several compact LP problems rather than solving a single growing problem.

The rest of the paper is organized as follows: We provide some preliminaries in Section II. In Section III we introduce an improved adaptive LP decoder which makes use of the automorphism groups. In Section IV we present an adaptive branch and bound decoding algorithm. In Section V we propose two further enhancements for the adaptive LP decoder: parity check matrix adaptation and removing inactive constraints. Simulation results are presented in Section VI. Section VII concludes the paper.

II. PRELIMINARIES

A linear block code 𝒞 of length n is defined by its parity check matrix H. A binary vector x of length n is a codeword iff it satisfies Hx = 0 in GF(2). In this paper we consider transmission over the Additive White Gaussian Noise (AWGN) channel using BPSK modulation. Let y be the receiver input; then the ML decoding problem can be formulated in the following way:

x_{ML} = \arg\min \Big\{ \sum_{i=1}^{n} c_i x_i \;\; \text{s.t.} \;\; x \in \mathcal{C} \Big\} \qquad (1)

where c_i = \log\big(\Pr(y_i \mid x_i = 0) / \Pr(y_i \mid x_i = 1)\big) is the log likelihood ratio of the i-th bit and x_{ML} is the maximum likelihood codeword. The vector c = [c_1, \ldots, c_n] is also called the cost vector.

Each check node j in the Tanner graph [14] of a code generates a local codeword polytope [1]. The local codeword polytope is defined by the following set of constraints:

\sum_{i \in N(j) \setminus S} x_i + \sum_{i \in S} (1 - x_i) \ge 1, \qquad |S| \ \text{odd} \qquad (2)

where N(j) is the set of all the bit nodes connected to check node j and S is an odd-sized subset of N(j). The constraints (2) are also known as Forbidden Set (FS) inequalities or FS constraints.

The decoder of [1] performs LP optimization over the fundamental polytope. When the LP solver returns an integer codeword, it has the ML certificate property [1], which guarantees that the decoded word is the ML codeword. However, when the solution contains fractional coefficients, the decoder returns an error. In [1], Feldman proposed adding RPC cuts in order to tighten the relaxed polytope and thus improve the performance. However, no efficient way of finding such RPCs was given.

The number of FS constraints required to describe the fundamental polytope in Feldman's LP decoder [1] is \sum_{j=1}^{m} 2^{d_c(j)-1}, where d_c(j) is the order of the j-th row of H and m is the number of rows in H. The complexity of the decoder of [1] grows exponentially with the density of the parity check matrix; thus such a decoder is only applicable to LDPC codes.

The ALP decoder of [3] adaptively adds FS inequalities (2) and solves a set of compact LP problems rather than one large problem. In [3] it was shown that such an adaptive decoder converges to the same solution as the original LP decoder of [1], but with much less complexity. For each check node, only a single constraint from the set S of (2) can generate a cut. Furthermore, the maximal number of iterations required is bounded by the code length. These bounds make adaptive LP decoding efficient even for HDPC codes. However, the performance of [3] is poor when applied to HDPC codes.

The NSA decoder proposed by Tanatmis et al. in [12] is also an adaptive LP decoder. In [12], additional auxiliary variables are used as indicators to detect the violated parity checks. Each indicator variable corresponds to a row of the parity check matrix and satisfies the following equality constraints:
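As a concrete illustration of the FS inequalities (2) and the constraint count above, the following sketch (our own, not part of the paper; the [7,4] Hamming parity check matrix and helper names are illustrative assumptions) enumerates all FS constraints of a small code and checks that every codeword satisfies them:

```python
from itertools import combinations, product

# Illustrative parity check matrix of the [7,4] Hamming code (our choice).
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]
n = 7

def fs_inequalities(H):
    r"""Enumerate the FS inequalities (2): for each check j and every
    odd-sized subset S of N(j), require
        sum_{i in N(j)\S} x_i + sum_{i in S} (1 - x_i) >= 1."""
    cuts = []
    for row in H:
        Nj = [i for i, h in enumerate(row) if h]
        for k in range(1, len(Nj) + 1, 2):            # odd |S| only
            for S in combinations(Nj, k):
                cuts.append((set(Nj) - set(S), set(S)))
    return cuts

def satisfies(x, cuts):
    return all(sum(x[i] for i in plain) + sum(1 - x[i] for i in flipped) >= 1
               for plain, flipped in cuts)

codewords = [x for x in product((0, 1), repeat=n)
             if all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)]

cuts = fs_inequalities(H)
# The count matches sum_j 2^(d_c(j) - 1): here 3 * 2^3 = 24 constraints.
assert len(cuts) == sum(2 ** (sum(row) - 1) for row in H)
# Every codeword satisfies all FS inequalities, as (2) requires.
assert all(satisfies(x, cuts) for x in codewords)
```

Even on this tiny code, each check of degree 4 already contributes 8 inequalities, which hints at the exponential blow-up for dense HDPC matrices discussed above.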
The initial LP problem of [12] contains only the constraints of (3). If the decoder output is integral, the ML certificate property guarantees that the output is the ML codeword. Otherwise, valid cuts are generated and added to the LP formulation to eliminate the fractional pseudocodeword. The modified LP problem is solved repetitively until an integral solution is found or no more valid cuts can be generated. Valid cuts are generated either from the FS inequalities of (2) or by generating RPC cuts. RPC cuts are generated based on the observation that every fractional pseudocodeword is cut by an RPC whose support contains only one fractional index. Such RPCs are generated by Gaussian elimination of the parity check matrix, as described in [12].

III. LP DECODERS UTILIZING AUTOMORPHISM GROUPS

The automorphism group of a linear block code 𝒞, Aut(𝒞), is the group of all coordinate permutations which send 𝒞 into itself [15]. For example, the automorphism group of any cyclic code includes the cyclic shift permutation, as well as other permutations which depend on the internal structure of the code. For many widely used block codes the automorphism groups are well known. Automorphism groups of some primitive binary BCH codes are provided in [16].

Several BP decoders [17], [18], [19] exploit knowledge of the automorphism group of the code to generate alternative representations of the parity check matrix by permuting its columns with a permutation taken from Aut(𝒞). In BP decoders, the choice of a parity check matrix has a crucial influence on the Tanner graph [14] of the code and thus on the performance of the decoder. It is shown in [17], [18], [19] that a BP decoder running on several H matrices in parallel achieves a significant performance improvement for HDPC codes, though it increases the computational complexity.

We use the same approach with LP decoders, based on the following: First, an LP decoder is also susceptible to the choice of the parity check matrix [1], i.e. different parity check matrices will have different relaxations of the codewords polytope and thus different pseudocodewords. A systematic way of a-priori choosing the representation of the parity check matrix that will yield the best decoding result for a given received word is still unknown; thus we propose to run several LP decoders in parallel, each one using a different parity check matrix representation. We denote this concept parity check matrix diversity. Second, alternative representations of a parity check matrix can be exploited to derive new RPC cuts when using adaptive LP decoders [3], [12]. When an adaptive decoder is unable to generate any more valid cuts, alternative representations of the parity check matrix can provide additional RPCs to generate valid cuts in order to obtain a tighter relaxation. Each of the two approaches presented above can be used with any LP decoder ([1], [3], [10] and [12]). In the following we propose using the first approach with the NSA decoder of [12] and the second with the ALP decoder of [3].

A. LP Decoder Utilizing Parity Check Matrix Diversity

The behavior of an adaptive LP decoder [3], [12] is difficult to analyze: at each stage, the relaxed polytope is modified due to the addition of new constraints and possibly the disposal of old constraints [3]. These modifications lead to the emergence of some new pseudocodewords and the disappearance of some existing pseudocodewords. For each chosen parity check matrix, the adaptive LP decoder may converge to a fractional pseudocodeword minimizing the objective function in a vicinity of the received word, causing a decoding failure. Such a pseudocodeword may not exist in the relaxed polytope derived from a different parity check matrix.

Parity check matrix diversity is a powerful tool to reduce the likelihood of decoding failures caused by converging to pseudocodewords. Algorithm 1 is an improved version of the NSA decoder provided in [12], and its performance gain will be demonstrated in Section VI. For each received word, the decoder tries to decode the word several times, each time with a different representation of the parity check matrix. Each new representation is generated by permuting the columns of H with a random permutation taken from Aut(𝒞). The ML certificate property of the LP decoder is used as a stopping criterion when an integral solution is found. If none of the decoders returns a valid codeword, the algorithm returns the fractional solution with minimal Euclidean distance from the received word.

In the description of Algorithm 1 we use the following notation: x = \arg\min \{c^t x \ \text{s.t.}\ H\} means: run the NSA decoder, which finds the x that minimizes c^t x, using H as the parity check matrix representation. The worst case complexity of Algorithm 1 is higher than the worst case complexity of the NSA decoder by a linear factor N, while resulting in a major performance enhancement. One can trade performance for complexity by controlling the maximal number of NSA decoding attempts, N.

B. LP Decoder Utilizing RPC Cuts Based On Alternative Parity Check Matrix Representations

As was stated earlier, alternative parity check matrix representations can contribute to the search for new valid RPC cuts. Valid RPC cuts are generated from the rows of the permuted H. Such an approach achieves a tighter relaxation of the codeword polytope and results in superior performance. This approach is demonstrated in Algorithm 2, based on the ALP decoder of [3].

The first stage of the proposed algorithm is to decode the received word using the ALP decoder of [3]. If the result is integral, the ML certificate property holds and the optimal solution is returned. Otherwise, a new representation of the parity check matrix is generated by permuting the columns of H with a random permutation taken from Aut(𝒞). The ALP decoder is run repetitively, each time with a different representation of H. The decoder continues until an integral solution is found or a maximal number of decoding attempts is reached. The major difference between the current approach and the approach presented in sub-section III-A is that the current approach preserves constraints generated in previous decoding attempts.

The worst case complexity of Algorithm 2 is higher by a factor N than the worst case complexity of the ALP decoder of [3]. The parameter N is chosen as a trade-off between the desirable performance and complexity.
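The automorphism property that both algorithms rely on can be sketched in a few lines (our own illustration, not from the paper; the cyclic [7,4] Hamming code and the helper names are assumptions): the cyclic shift belongs to Aut(𝒞), so permuting coordinates by it maps the code onto itself, which is what makes the permuted parity check matrix a valid alternative representation of the same code.

```python
from itertools import product

def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials (coefficient lists, lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

n = 7
g = [1, 1, 0, 1]   # g(x) = 1 + x + x^3, generator of the [7,4] cyclic Hamming code

# The cyclic code: all multiples m(x)g(x) with deg m(x) < 4 (degree < 7 here,
# so no modular reduction is needed).
code = set()
for m in product((0, 1), repeat=4):
    c = poly_mul_gf2(list(m), g)
    code.add(tuple(c + [0] * (n - len(c))))

def cyclic_shift(x):
    return x[-1:] + x[:-1]

# The cyclic shift permutation is in Aut(C): it maps the code onto itself,
# which is why a column-permuted H describes the very same code.
assert {cyclic_shift(c) for c in code} == code
assert len(code) == 16
```

An arbitrary column permutation would generally produce an equivalent but different code; only permutations from Aut(𝒞) keep the codeword set, and hence the ML solution, unchanged.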
Algorithm 1 NSA decoder of [12] with parity check matrix diversity
Input:
  r - Received vector
  c - Cost vector
  H - Parity check matrix
  Aut(𝒞) - Automorphism group of code 𝒞
  N - Maximal number of decoding attempts
Output:
  x_opt - Returned optimal solution

Init x_opt = 0, c = -r
for i in range 1 to N do
  Run the NSA decoder of [12]: x_i = arg min {c^t x s.t. H}
  if x_i is integral then
    x_opt = x_i
    Permute the result: x_opt = π_1 ⋅ … ⋅ π_{i-1}(x_i)
    Return x_opt
    Terminate
  end if
  Choose randomly a permutation: π_i ∈ Aut(𝒞)
  Apply the inverse permutation to the cost vector: c = π_i^{-1}(c)
end for
Permute the vectors: x_k = π_1 ⋅ … ⋅ π_k(x_k), k ∈ 1, …, N
Find the solution x_opt = arg min_k ‖x_k − r‖, k ∈ 1, …, N
Return x_opt

Algorithm 2 Improved ALP decoder of [3] using alternative H representations
Input:
  r - Received vector
  c - Cost vector
  H - Parity check matrix
  Aut(𝒞) - Automorphism group of code 𝒞
  N - Maximal number of decoding attempts
  CS - Set of initial constraints
Output:
  x_opt - Returned optimal solution

Init c = -r, CS = {}
for i in range 1 to N do
  Run the ALP decoder of [3]: x = arg min {c^t x s.t. H} with constraints set CS
  Add all new generated constraints to the set CS
  if x is integral then
    break
  end if
  Choose a random permutation: π_i ∈ Aut(𝒞)
  Permute the columns of the parity check matrix H: H = π_i(H)
end for
Return x_opt = x
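The cost-vector bookkeeping in Algorithm 1 can be sanity-checked in a few lines (a sketch under our own conventions; `permute`, `invert` and the stand-in decoder output are hypothetical, not the paper's code): applying π_i^{-1} to the cost vector before each attempt and mapping the candidate back through π_1 ⋅ … ⋅ π_k preserves the objective value, so candidates from different attempts are comparable in the original coordinates.

```python
import random

def permute(x, p):
    """Apply a coordinate permutation: output position i receives x[p[i]]."""
    return [x[p[i]] for i in range(len(p))]

def invert(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return inv

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

random.seed(0)
n = 8
c = [random.gauss(0, 1) for _ in range(n)]   # cost vector, c = -r
perms = []
c_work = c[:]
for _ in range(3):                           # three decoding attempts
    p = list(range(n)); random.shuffle(p)
    perms.append(p)
    c_work = permute(c_work, invert(p))      # c <- pi_i^{-1}(c), as in Algorithm 1

# Stand-in for a decoder output found against the permuted cost vector.
x = [random.randint(0, 1) for _ in range(n)]

# Map the candidate back to the original coordinates: x_opt = pi_1 . ... . pi_k (x).
x_back = x[:]
for p in reversed(perms):
    x_back = permute(x_back, p)

# The objective value is preserved, so candidates from different attempts
# can be compared (and the final arg-min taken) in the original coordinates.
assert abs(dot(c, x_back) - dot(c_work, x)) < 1e-9
```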
Algorithm 1 and Algorithm 2 rely on the NSA decoder and the ALP decoder, respectively. Despite the performance issue, the main advantage of using an ALP decoder is that, unlike an NSA decoder, the ALP decoder does not employ Gaussian elimination for the search of valid cuts. The use of Gaussian elimination may have a severe implication on the complexity, especially for large codes. In Algorithm 2 we employ H matrix permutations for the search of new valid cuts, which has a complexity of only O(n) since only the columns' indices have to be permuted.

IV. A BRANCH AND BOUND LP-BASED ADAPTIVE DECODER

The branch-and-bound method, in the context of linear programming decoding, was first proposed in [20] as a multistage LP decoding of LDPC codes. The proposed decoder is a suboptimal LP decoder, for which deeper depths lead to better performance at the cost of increased complexity. It can be applied to both LDPC and HDPC codes, does not use mixed integer programming, and requires less computation compared to [11]. It is able to refine the results of [12] by systematically eliminating the most unreliable fractional elements towards finding the ML codeword.

The proposed decoder initially tries to find the ML codeword by calling the NSA decoder. If a fractional pseudocodeword was returned and no new valid cuts could be found, our decoder recursively constructs two new LP problems and solves them using [12]. Each new problem is equipped with a new equality constraint, forcing the most unreliable fractional element to have a binary value. The most unreliable fractional element is chosen using the following novel technique. The proposed decoder maintains a set of counters, one for each element. At each iteration of the NSA, the counters of all the fractional elements are incremented. When the NSA outputs an error, the fractional element with the highest counter value is chosen as the most unreliable fractional element. The decoder adopts a depth-first tree searching approach of limited depth. At each node a different LP problem is solved. The root of the tree (depth equals zero) is the fractional solution (pseudocodeword) obtained by solving the original problem using [12]. A node at depth i has i new equality constraints, and has two children at depth i + 1. The proposed algorithm is described below as Algorithm 3 and makes use of procedure BB_decode, presented just after it.

The decoders of [1], [3] and [12] make use of the ML certificate property as a stopping criterion. The new equality constraints may lead to infeasible solutions or to integral solutions which are not ML codewords. In order to maintain the ML certificate property, our decoder has three pruning possibilities:
1) Pruning by infeasibility: If the equality constraints violate the parity check matrix constraints, adding more constraints (deeper search) will not make the solution feasible.
2) Pruning by bound: The integral solution with the smallest cost found so far is stored as an incumbent. The optimal cost value of the incumbent is stored as B_upper-bound. If the cost of the current solution is not smaller than B_upper-bound, the branch is pruned.
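The counter-based selection of the most unreliable fractional element described above can be rendered as follows (our own minimal sketch; the function name and the data layout are assumptions, not the paper's implementation):

```python
def most_unreliable(fractional_history, n):
    """Counter-based selection: every decoding iteration contributes the set
    of coordinates that were fractional; on failure, the coordinate with the
    highest count is chosen for branching."""
    counters = [0] * n
    for frac_set in fractional_history:
        for i in frac_set:
            counters[i] += 1
    # max() returns the first coordinate achieving the highest counter value.
    return max(range(n), key=lambda i: counters[i])

# Coordinate 2 was fractional in every iteration, so it is branched on first.
assert most_unreliable([{1, 2}, {2, 5}, {2}], 6) == 2
```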
Algorithm 3:
Run the NSA decoder of [12]: x = arg min {c^t x s.t. H}
if the output of [12] is integral then
  return x_opt = x
  Terminate
end if
Find the most unreliable fractional element x_i and construct two LP problems:
1) ORG_BB_zero: Original problem with a constraint x_i = 0
2) ORG_BB_one: Original problem with a constraint x_i = 1
Set current_depth = 1
Set B_upper-bound = MAXINT
Set x_opt = 0
Call Procedure 1 with (x_opt, B_upper-bound, current_depth) = BB_decode(ORG_BB_zero, current_depth, BB_depth, B_upper-bound)

Procedure 1 BB_decode:
Output:
  x_opt - Returned optimal solution
  B_upper-bound - Objective value upper bound
  current_depth - Current tree depth

if current_depth > BB_depth then
  return (x_opt, B_upper-bound, current_depth)
end if
Run the NSA decoder of [12]: x = arg min {c^t x s.t. H}
Denote the objective value as c*
if no feasible solution is found then
  return (x_opt, B_upper-bound, current_depth)
end if
if the solution is integral then
  if c* < B_upper-bound then
    B_upper-bound = c*
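A toy rendering of the depth-first branch and bound idea (our own sketch, not the paper's Algorithm 3: the NSA relaxation is replaced by a trivial additive lower bound over the free bits, while pruning by bound and pruning by infeasibility are kept; the [7,4] Hamming matrix is an illustrative assumption):

```python
from itertools import product

H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]          # [7,4] Hamming code, illustrative
n = 7

def is_codeword(x):
    return all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)

def bb_decode(c):
    """Depth-first branch and bound: fix one bit per level; prune a branch
    when an optimistic completion bound cannot beat the incumbent (pruning
    by bound) and discard non-codeword leaves (pruning by infeasibility)."""
    best = [None, float("inf")]      # incumbent codeword and its cost

    def lower_bound(prefix):
        fixed = sum(ci * xi for ci, xi in zip(c, prefix))
        return fixed + sum(min(0.0, ci) for ci in c[len(prefix):])

    def dfs(prefix):
        if lower_bound(prefix) >= best[1]:         # pruning by bound
            return
        if len(prefix) == n:
            if is_codeword(prefix):                # pruning by infeasibility
                best[0] = list(prefix)
                best[1] = sum(ci * xi for ci, xi in zip(c, prefix))
            return
        for bit in (0, 1):
            dfs(prefix + [bit])

    dfs([])
    return best[0]

c = [0.9, -1.1, 0.3, -0.4, 1.2, -0.2, 0.1]         # toy LLR cost vector
x_bb = bb_decode(c)
# Brute-force ML decoding over all 16 codewords for comparison.
x_ml = min((x for x in product((0, 1), repeat=n) if is_codeword(x)),
           key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))
assert x_bb == list(x_ml)
```

Because the bound is valid (each free bit contributes at least min(0, c_i)), a branch containing the optimum is never pruned, so the search returns the ML codeword while skipping most of the tree.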
V. FURTHER ENHANCEMENTS

A. Adapting the Parity Check Matrix According to the Channel Observations

The method of adapting the parity check matrix was proposed for the sum product algorithm by Jiang and Narayanan [21]. We utilize this technique in LP decoding. In [21], matrix adaptation is performed before each decoding iteration. Due to the nature of LP decoders, for which no LLR refinement is obtained, the matrix adaptation is performed only once, prior to decoding. We perform Gaussian elimination such that a sparse submatrix corresponding to the less reliable bits is created prior to decoding. We show in the next section that adapting the parity check matrix prior to decoding can significantly improve both the performance and the complexity. The latter is achieved due to a faster convergence of the decoding algorithm. In [3], Taghavi et al. have proven that a fractional solution of the LP decoder requires the existence of a cycle in the Tanner graph over all fractional-valued nodes. Less reliable bits are more prone to become fractional due to their low weight in the objective function. By employing matrix adaptation, these cycles are pruned, and better performance is obtained.

B. Removing Inactive Constraints

The performance and complexity of any adaptive LP decoder are strongly related to the number of constraints in the LP problem. An LP decoder based on the Simplex algorithm searches for an optimal solution over the vertices of the relaxed polytope. The number of codeword vertices of any relaxed polytope is constant, while adding new constraints increases the total number of vertices. Obviously, all the newly added vertices are non-codeword fractional pseudocodewords.

Proposition 1. Removing all the inactive constraints from the LP problem does not affect the optimal solution.

Proof: It is enough to prove that removing a single inactive constraint does not affect the LP solution. Denote by c = (c_1, \ldots, c_n) the cost vector, by x = (x_1, \ldots, x_n) the variables vector of the LP problem, and by A an m by n constraints matrix whose rows are {a_i | i = 1, \ldots, m}, each row representing an inequality constraint. Using the above notation, the LP problem is formulated as follows:

\min(c^t x) \quad \text{s.t.} \quad Ax \le b \qquad (4)

Without loss of generality, suppose P is the optimal solution of problem (4), and the row a_1 corresponds to an inactive constraint, i.e.

a_1 x < b_1 \qquad (5)

The hyper-plane a_1 x = b_1 divides the space into two sub-spaces, as shown in Fig. 2; the sub-space on one side of the hyper-plane is infeasible. After removing inactive constraint (5) from the LP problem and solving the new problem, a new optimal solution is found. Let Q be the optimal solution after removing (5). We have to prove that P = Q. Assume by contradiction that Q ≠ P; then:

c^t Q < c^t P \qquad (6)

Also note that Q must lie on the infeasible side of the hyper-plane, because otherwise Q would have been the optimal solution of (4). Let l be the line connecting P and Q and let R be the crossing point of l and the hyper-plane, such that

R = tP + (1 - t)Q, \quad t \in (0, 1) \qquad (7)
[Fig. 2: constraint hyper-planes a_1, a_2, a_3, …, a_n, the line l, and the points Q and R.]
Note that R is a feasible point of (4). From (6) and (7) we get c^t R = t c^t P + (1 - t) c^t Q < c^t P. Since R is feasible for (4), this contradicts the optimality of P; hence P = Q. ∎

(a) Frame Error Rate for BCH[63,36,11].
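Proposition 1 can be illustrated numerically (a self-contained sketch of our own with a toy two-variable LP and a naive vertex-enumeration solver, not the decoders' CPLEX setup): removing the constraints that are inactive at the optimum leaves the optimal solution unchanged.

```python
from itertools import combinations

def solve_lp_2d(c, A, b, eps=1e-9):
    """Naive 2-variable LP solver: enumerate candidate vertices as
    intersections of pairs of constraint boundaries, keep the feasible
    ones, and return the minimum-cost vertex (assumes a vertex optimum)."""
    best = None
    for i, j in combinations(range(len(A)), 2):
        (a11, a12), (a21, a22) = A[i], A[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < eps:
            continue                                   # parallel boundaries
        x = ((b[i] * a22 - a12 * b[j]) / det,
             (a11 * b[j] - b[i] * a21) / det)
        if all(A[k][0] * x[0] + A[k][1] * x[1] <= b[k] + eps
               for k in range(len(A))):
            cost = c[0] * x[0] + c[1] * x[1]
            if best is None or cost < best[1]:
                best = (x, cost)
    return best

c = [-3.0, -1.0]
A = [[-1, 0], [0, -1], [1, 1], [2, 1]]   # x >= 0, y >= 0, x + y <= 4, 2x + y <= 6
b = [0, 0, 4, 6]

x_opt, cost = solve_lp_2d(c, A, b)       # optimum at the vertex (3, 0)
# Constraints with strict slack at the optimum are inactive.
inactive = [k for k in range(len(A))
            if A[k][0] * x_opt[0] + A[k][1] * x_opt[1] < b[k] - 1e-9]
A2 = [row for k, row in enumerate(A) if k not in inactive]
b2 = [bk for k, bk in enumerate(b) if k not in inactive]

x_opt2, cost2 = solve_lp_2d(c, A2, b2)
# Proposition 1 on this instance: the optimum is unchanged.
assert x_opt == x_opt2 and abs(cost - cost2) < 1e-9
```

Here x ≥ 0 and x + y ≤ 4 are inactive at (3, 0) and are dropped; only the two active constraints, which define the optimal vertex, are kept.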
VI. SIMULATION RESULTS

Better performance can be obtained when higher diversity orders are used for Decoder A and higher maximal depths are used for Decoder B, at the cost of increasing the decoding time. Obviously this is not the case for our enhancements, which have a fixed improvement. Combining our algorithms and our enhancements is natural, since we can gain better performance with almost no effect on the decoding time.

In Decoder A we use diversity of order 5 as a tradeoff between performance and complexity. Regardless of the choice of the code, a higher diversity order achieves only a minor performance improvement at the cost of a linear increase in complexity. For Decoder B, the maximal depth, Dp, required to achieve near-ML performance varies for each code. In order to have approximately a 0.1dB gap from the ML curve, Dp=4, Dp=6 and Dp=8 are chosen for BCH[63,39,9], BCH[63,36,11] and BCH[127,99,9], respectively. Decoder C is an improved NSA decoder with both superior performance and reduced complexity. Simulation results for the mentioned decoders appear in Figs. 3, 4 and 5. As a benchmark, the curves of the NSA decoder and the ML decoder are plotted. Complexity is estimated as the average run-time of a decoded word, while all simulations were run on the same Linux machine (CentOS release 4.5, 8G RAM), using the same LP solver (CPLEX 10.2).

Decoder A achieves a 0.3 to 0.4dB performance gain in frame error rate compared to the NSA decoder, with almost the same complexity, for the tested BCH codes. Decoder B achieves a 0.9 to 1dB gain for BCH[63,36,11], a 0.5 to 0.6dB gain for BCH[63,39,9] and a 0.6 to 0.7dB gain for BCH[127,99,9]. Decoder C achieves a 0.2 to 0.3dB gain with half the complexity of the NSA decoder. Due to the ML certificate property, the complexity of Decoders A and B decreases sharply with the growth of SNR. At high SNR the complexity of all the presented algorithms is slightly inferior compared to the NSA decoder, since parity check matrix adaptation requires at least one Gaussian elimination of the parity check matrix.

VII. CONCLUSIONS

Several improved adaptive LP decoders efficiently operating on HDPC codes were presented. A new LP decoder based on parity check matrix diversity was presented. This decoder
outperforms the NSA decoder by more than 0.4dB. A new branch and bound decoder was proposed, achieving performance within 0.1dB of the optimal ML decoder. Two further enhancements were proposed: adaptation of the parity check matrix prior to decoding and disposal of inactive constraints. Both enhancements were shown to achieve both performance gain and complexity reduction. The benefit of each algorithm was examined, as well as its performance versus its complexity. Based on the presented simulation results, a decoder that satisfies given performance and latency requirements can be chosen among the presented techniques.

REFERENCES

[1] J. Feldman, M. J. Wainwright, and D. R. Karger, "Using linear programming to decode binary linear codes," IEEE Trans. Inf. Theory, vol. 51, no. 1, pp. 954-972, Jan. 2005.
[2] P. O. Vontobel and R. Koetter, "Graph-cover decoding and finite length analysis of message-passing iterative decoding of LDPC codes," 2005. [Online]. Available: http://www.arxiv.org/abs/cs.IT/0512078
[3] M. H. Taghavi, A. Shokrollahi, and P. H. Siegel, "Efficient implementation of linear programming decoding," IEEE Trans. Inf. Theory, Dec. 2008, submitted for publication. [Online]. Available: http://arxiv.org/abs/0902.0657
[4] P. O. Vontobel and R. Koetter, "Towards low-complexity linear-programming decoding," in Proc. 4th Intern. Conf. Turbo Codes Related Topics, Munich, Germany, Apr. 2006. [Online]. Available: http://arxiv.org/abs/cs.IT/0602088
[5] P. O. Vontobel and R. Koetter, "Bounds on the threshold of linear programming decoding," in Proc. IEEE Inf. Theory Workshop (ITW 2006), Punta del Este, Uruguay, Mar. 2006. [Online]. Available: http://arxiv.org/abs/cs/0602087v1
[6] D. Burshtein, "Iterative approximate linear programming decoding of LDPC codes with linear complexity," IEEE Trans. Inf. Theory, vol. 55, no. 11, pp. 4835-4859, Nov. 2009.
[7] A. G. Dimakis, A. A. Gohari, and M. J. Wainwright, "Guessing facets: polytope structure and improved LP decoding," IEEE Trans. Inf. Theory, vol. 55, no. 8, pp. 3479-3487, Aug. 2009.
[8] M. Chertkov and M. Stepnov, "An efficient pseudo-codeword-search algorithm for linear programming decoding of LDPC codes," IEEE Trans. Inf. Theory, vol. 54, no. 4, pp. 1514-1520, Apr. 2008.
[9] M. Chertkov and M. Stepnov, "Pseudo-codeword landscape," in Proc. IEEE Intern. Symp. Inf. Theory, Nice, France, June 2007.
[10] K. Yang, X. Wang, and J. Feldman, "A new linear programming approach to decoding linear block codes," IEEE Trans. Inf. Theory, vol. 54, no. 3, pp. 1061-1072, Mar. 2008.
[11] C. S. Draper, J. S. Yedidia, and Y. Wang, "ML decoding via mixed-integer adaptive linear programming," in Proc. IEEE International Symp. Inf. Theory (ISIT), Nice, France, June 2007.
[12] A. Tanatmis, S. Ruzika, H. W. Hamacher, M. Punekar, F. Kienle, and N. When, "A separation algorithm for improved LP decoding of linear block codes," 2008. [Online]. Available: http://arxiv.org/abs/0812.2559
[13] A. Tanatmis, S. Ruzika, H. W. Hamacher, M. Punekar, F. Kienle, and N. When, "Valid inequalities for binary linear codes," in Proc. IEEE International Symp. Inf. Theory, Seoul, Korea, July 2009.
[14] N. Wiberg, "Codes and decoding on general graphs," Ph.D. dissertation, Linkoping University, Linkoping, Sweden, 1996.
[15] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. Amsterdam, Holland: North-Holland, 1977.
[16] C. C. Lu and L. R. Welch, "On automorphism groups of binary primitive BCH codes," in Proc. IEEE International Symp. Inf. Theory, Trondheim, Norway, June 1994, p. 51.
[17] T. R. Halford and K. M. Chugg, "Random redundant iterative soft-in soft-out decoding," IEEE Trans. Commun., vol. 56, no. 4, pp. 513-517, Apr. 2008.
[18] T. Hehn, J. B. Huber, S. Leander, and O. Milenkovich, "Multiple-bases belief-propagation for decoding of short block codes," in Proc. IEEE International Symp. Inf. Theory, Nice, France, 2007.
[19] I. Dimnik and Y. Be'ery, "Improved random redundant iterative HDPC decoding," IEEE Trans. Commun., vol. 57, no. 7, pp. 1982-1985, July 2009.
[20] K. Yang, X. Wang, and J. Feldman, "Non-linear programming approaches to decoding low-density parity check codes," IEEE J. Sel. Areas Commun., vol. 24, no. 8, pp. 1603-1613, Aug. 2006.
[21] J. Jiang and K. R. Narayanan, "Iterative soft decision decoding of Reed-Solomon codes based on adaptive parity check matrices," IEEE Trans. Inf. Theory, vol. 52, no. 8, pp. 3746-3756, Jan. 2006.

Alex Yufit was born in the former Soviet Union in 1981 and immigrated to Israel in 1991. He received his B.Sc. degree in electrical engineering from the Technion, the Israel Institute of Technology, Haifa, in 2004. He is currently a system architect at Ceragon Networks, Tel Aviv, Israel, and at the same time he is working on his M.Sc. degree in electrical engineering at Tel Aviv University. His fields of interest include digital communications, error correcting codes and iterative decoding.

Asi Lifshitz was born in Israel in 1975. He received the B.Sc. degree in electrical engineering from Tel Aviv University, Israel, in 2005. He is currently pursuing his M.Sc. degree in electrical engineering at Tel Aviv University. He is an ASIC engineer at CopperGate Communications, Tel Aviv, Israel. His fields of interest include System-on-Chip (SoC) simulations, digital communications, error correcting codes and iterative decoding algorithms.

Yair Be'ery was born in Israel in 1956. He received the B.Sc. (Summa Cum Laude), M.Sc. (Summa Cum Laude), and Ph.D. degrees, all in electrical engineering, from Tel Aviv University, Israel, in 1979, 1979, and 1985, respectively. He is currently a Professor in the Department of Electrical Engineering - Systems, Tel Aviv University, where he has been since 1985. He served as the Chair of the Department during the years 1999-2003. He is the recipient of the 1984 Eliyahu Golomb Award from the Israeli Ministry of Defense, the 1986 Rothschild Fellowship for postdoctoral studies at Rensselaer Polytechnic Institute, Troy, NY, and of the 1992 Electronic Industry Award in Israel. His research interests include digital communications, error control coding, turbo codes and iterative decoding, combined coding and modulation, and VLSI architectures and algorithms for systolic and multi-core arrays.