III LDPC Code Tutorial
Block Code Fundamentals (cont'd)
the correspondence (mapping) u -> c is thus naturally written as

    c = u_0 g_0 + u_1 g_1 + ... + u_{k-1} g_{k-1}

in matrix form, this is c = uG, where the k x n matrix

    G = [ g_0
          g_1
          ...
          g_{k-1} ]

is the so-called generator matrix for C.
clearly, there are 2^k data words u = [u_0, u_1, ..., u_{k-1}] and 2^k corresponding codewords c = [c_0, c_1, ..., c_{n-1}] in the code C; the k rows {g_i} of G span the code.
(after possible column swapping, which permutes the order of the bits in the codewords) the null space C^perp of the subspace C has dimension n-k and is spanned by n-k linearly independent vectors h_0, h_1, ..., h_{n-k-1}. since each h_i is in C^perp, we must have for any c in C that

    c h_i^T = 0 for all i

so that, collecting the h_i as the rows of the parity-check matrix H,

    c H^T = 0

if and only if c is in C.
Block Code Fundamentals (cont'd)
suppose c has w 1s (i.e., the Hamming weight of c is w_H(c) = w) and the locations of those 1s are L_1, L_2, ..., L_w. then the computation cH^T = 0 effectively adds w rows of H^T (rows L_1, L_2, ..., L_w) to obtain the vector 0. one important consequence of this fact is that the minimum distance d_min (= minimum weight w_min) of C is exactly the minimum number of rows of H^T which can be added together to obtain 0.
Example. For the (7, 4) code with

    H = [ 1 1 1 0 1 0 0
          1 1 0 1 0 1 0
          1 0 1 1 0 0 1 ]

we can see that no two rows of H^T (i.e., columns of H) sum to 0, but row 0 + row 1 + row 6 of H^T = 0, so d_min = 3.
Low-Density Parity-Check Codes
note that the parity-check matrix H is so called because it performs m = n-k separate parity checks on a received word y = c + e.
Example. With H as given above, the n-k = 3 parity checks implied by yH^T = 0 are

    y_0 + y_1 + y_2 + y_4 =? 0
    y_0 + y_1 + y_3 + y_5 =? 0
    y_0 + y_2 + y_3 + y_6 =? 0
Definition. A low-density parity-check (LDPC) code is a linear block code for which the parity-check matrix H has a low density of 1s
Low-Density Parity-Check Codes (cont'd)
Definition. A regular (n, k) LDPC code is a linear block code whose parity-check matrix H contains exactly W_c 1s per column and exactly W_r = W_c(n/m) 1s per row, where W_c << m.
Remarks:
- note multiplying both sides of W_c << m by n/m implies W_r << n
- the code rate r = k/n can be computed from

    r = (W_r - W_c)/W_r = 1 - W_c/W_r
if H is low density, but the number of 1s per column or row is not constant, the code is an irregular LDPC code.
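where the distinction matters in practice, regularity and the rate bound are easy to check programmatically. a minimal sketch, not from the tutorial (it assumes numpy and a 0/1 matrix H); note that 1 - W_c/W_r is only the design rate, equal to k/n when H is full rank:

    import numpy as np

    def design_rate(H: np.ndarray) -> float:
        """Return the design rate 1 - Wc/Wr of a regular LDPC matrix H."""
        col_w = set(H.sum(axis=0).tolist())   # distinct column weights
        row_w = set(H.sum(axis=1).tolist())   # distinct row weights
        if len(col_w) != 1 or len(row_w) != 1:
            raise ValueError("H is irregular: weights are not constant")
        Wc, Wr = col_w.pop(), row_w.pop()
        return 1.0 - Wc / Wr                  # equals k/n when H is full rank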
LDPC codes were invented by Robert Gallager of MIT in his PhD dissertation (1960). They received virtually no attention from the coding community until the mid-1990s.
one of the very few researchers who studied LDPC codes prior to the
recent resurgence is Michael Tanner of UC Santa Cruz
the two classes of nodes in a Tanner graph are the variable nodes (or bit nodes) and the check nodes (or function nodes); check node f_j is connected to variable node c_i whenever element h_ji of H is 1. one may deduce from this that there are m = n-k check nodes and n variable nodes.
Tanner Graphs (cont'd)
Example. (10, 5) block code with W_c = 2 and W_r = W_c(n/m) = 4.

    H = [ 1 1 1 1 0 0 0 0 0 0
          1 0 0 0 1 1 1 0 0 0
          0 1 0 0 1 0 0 1 1 0
          0 0 1 0 0 1 0 1 0 1
          0 0 0 1 0 0 1 0 1 1 ]

[Figure: Tanner graph with check nodes f_0, ..., f_4 on top and variable nodes c_0, ..., c_9 below, one edge per 1 in H.]
observe that nodes c_0, c_1, c_2, and c_3 are connected to node f_0, in accordance with the fact that in the first row of H, h_00 = h_01 = h_02 = h_03 = 1 (all others equal zero); for convenience, the first row and first column of H are assigned an index of 0. observe an analogous situation for f_1, f_2, f_3, and f_4. thus, as follows from the fact that cH^T = 0, the bit values connected to the same check node must sum to zero.
note that the Tanner graph in this example is regular: each bit node is of degree 2 (has 2 edge connections) and each check node is of degree 4, in accordance with the fact that W_c = 2 and W_r = 4. we also see from this why W_r = W_c(n/m) for regular LDPC codes: both sides count the edges of the graph, so

    (# v-nodes) x (v-node degree) = nW_c must equal (# c-nodes) x (c-node degree) = mW_r
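as a concrete check of these degree counts, here is a small sketch (variable names are mine) that builds the adjacency lists of the Tanner graph above from H and verifies the edge-count identity:

    H = [[1,1,1,1,0,0,0,0,0,0],
         [1,0,0,0,1,1,1,0,0,0],
         [0,1,0,0,1,0,0,1,1,0],
         [0,0,1,0,0,1,0,1,0,1],
         [0,0,0,1,0,0,1,0,1,1]]
    m, n = len(H), len(H[0])
    # f_j is connected to c_i exactly when H[j][i] = 1
    check_nbrs = [[i for i in range(n) if H[j][i]] for j in range(m)]
    var_nbrs = [[j for j in range(m) if H[j][i]] for i in range(n)]
    assert all(len(nb) == 4 for nb in check_nbrs)  # every c-node degree = Wr = 4
    assert all(len(nb) == 2 for nb in var_nbrs)    # every v-node degree = Wc = 2
    assert n * 2 == m * 4                          # n*Wc = m*Wr (both count edges)
    print(check_nbrs[0])                           # [0, 1, 2, 3]: f0 checks c0+c1+c2+c3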
Tanner Graphs (contd) Definition. A cycle of length l in a Tanner graph is a path comprising l edges which closes back on itself
the Tanner graph in the above example possesses a length-6 cycle as made evident by the 6 bold edges in the figure
Definition. The girth of a Tanner graph is the minimum cycle length of the graph
the shortest possible cycle in a bipartite graph is clearly a length-4 cycle. length-4 cycles manifest themselves in the H matrix as four 1s that lie on the corners of a submatrix of H:

              a           b
    H = [ ... 1 ... ...   1 ... ]   row r
        [ ... 1 ... ...   1 ... ]   row s

similarly, six 1s on the corners of a substructure built from rows r, s, t and columns a, b, c correspond to a length-6 cycle:

              a           b           c
    H = [ ... 1 ... ...   1 ... ...    ... ]   row r
        [ ... ... ... ... 1 ... ...  1 ... ]   row s
        [ ... 1 ... ...   ... ... .. 1 ... ]   row t
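a direct way to test for the length-4 pattern is to check every pair of columns for an overlap of two or more 1s; a sketch (assuming numpy; O(n^2) pair checks, fine at these sizes):

    import itertools
    import numpy as np

    def has_length4_cycle(H: np.ndarray) -> bool:
        """True iff some pair of columns of H shares 1s in two or more rows."""
        n = H.shape[1]
        for a, b in itertools.combinations(range(n), 2):
            if int(H[:, a] @ H[:, b]) > 1:   # inner product counts shared rows
                return True
        return False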
Construction of LDPC Codes
clearly, the most obvious path to the construction of an LDPC code is via the construction of a low-density parity-check matrix with prescribed properties. a number of design techniques exist in the literature, and we list a few:
- Gallager codes (semi-random construction)
- MacKay codes (semi-random construction)
- irregular and constrained-irregular LDPC codes (Richardson and Urbanke, Jin and McEliece, Yang and Ryan, Jones and Wesel, ...)
- finite-geometry-based LDPC codes (Kou, Lin, Fossorier)
- combinatorial LDPC codes (Vasic et al.)
- LDPC codes based on array codes (Fan)
Gallager Codes
The H matrix for a Gallager code has the general form:

    H = [ H_1
          H_2
          ...
          H_{W_c} ]

where H_1 is p x pW_r and has row weight W_r (and column weight 1), and the submatrices H_i are column-permuted versions of H_1.
Note H has column weight W_c and row weight W_r. The permutations must be chosen s.t. length-4 (and higher, if possible) cycles are avoided and the minimum distance of the code is large.
Code designs are often performed via computer search. Also see the 2nd edition of Lin and Costello (Prentice-Hall, 2004).
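a rough sketch of this construction (the permutations are purely random here, with no screening for short cycles or distance, so it is illustrative only; function and variable names are mine):

    import numpy as np

    def gallager_H(p: int, Wr: int, Wc: int, seed: int = 0) -> np.ndarray:
        """Stack Wc column-permuted copies of the band matrix H1 (p x p*Wr)."""
        rng = np.random.default_rng(seed)
        H1 = np.zeros((p, p * Wr), dtype=int)
        for i in range(p):
            H1[i, i * Wr:(i + 1) * Wr] = 1          # row i: Wr consecutive 1s
        blocks = [H1] + [H1[:, rng.permutation(p * Wr)] for _ in range(Wc - 1)]
        return np.vstack(blocks)

    H = gallager_H(p=5, Wr=4, Wc=3)                 # 15 x 20 matrix
    assert (H.sum(axis=0) == 3).all() and (H.sum(axis=1) == 4).all()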
Construction of LDPC Codes (cont'd) Tanner Codes
each bit node is associated with a code bit; each check node is associated with a subcode whose length is equal to the degree of the node.

[Figure: bit nodes at the bottom, connected through an interleaver to subcode (check) nodes of degrees n_1, n_2, ..., n_m at the top.]
Construction of LDPC Codes (cont'd) MacKay Codes
following MacKay (1999), we list ways to semi-randomly generate sparse matrices H in order of increasing algorithm complexity (but not necessarily improved performance):
1. H generated by starting from an all-zero matrix and randomly inverting W_c not-necessarily-distinct bits in each column (the resulting LDPC code will be irregular)
2. H generated by randomly creating weight-W_c columns
3. H generated with weight-W_c columns and (as near as possible) uniform row weight
4. H generated with weight-W_c columns, weight-W_r rows, and no two columns having overlap greater than one
5. H generated as in (4), plus short cycles are avoided
6. H generated as in (5), plus H = [H_1 H_2] is constrained so that H_2 is invertible (or at least H is full rank)
see http://www.inference.phy.cam.ac.uk/mackay/ for MacKay's large library of codes
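as an illustration, here is a sketch of construction (3) above (weight-W_c columns, near-uniform row weight); the greedy least-filled-row rule is my own stand-in for "as near as possible":

    import numpy as np

    def mackay_H(m: int, n: int, Wc: int, seed: int = 1) -> np.ndarray:
        rng = np.random.default_rng(seed)
        H = np.zeros((m, n), dtype=int)
        for col in range(n):
            fill = H.sum(axis=1) + rng.random(m)   # current row weights + tie-break
            rows = np.argsort(fill)[:Wc]           # the Wc least-filled rows
            H[rows, col] = 1
        return H

    H = mackay_H(m=6, n=12, Wc=3)
    assert (H.sum(axis=0) == 3).all()              # every column has weight Wc
    print(H.sum(axis=1))                           # row weights, near-uniform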
Construction of LDPC Codes (contd) MacKay Codes (contd) frequently an H matrix is obtained that is not full rank
Construction of LDPC Codes (cont'd) Repeat-Accumulate (RA) Codes
the codes are turbo-like and are simple to design and understand (albeit, they are appropriate only for low rates).

[Encoder figure: k data bits -> repeat block q times (qk bits) -> permute -> accumulate with 1/(1+D).]
for q = 3,

    G = [ I_k  I_k  I_k ] A

where A is the qk x qk accumulator matrix

    A = [ 1 1 1 ... 1
            1 1 ... 1
                ...
                  1 ]
for q = 3,

    H^T = A^{-1} [ I_k  0
                   I_k  I_k
                   0    I_k ]

note: since A corresponds to 1/(1+D), A^{-1} corresponds to 1+D, so that

    A^{-1} = [ 1 1
                 1 1
                   . .
                    1 1
                      1 ]
graphical representation:

[Figure: Tanner graph of the RA code: repetition nodes r_1, r_2, ..., r_k connected through the permuter to a chain of accumulator ('a') check nodes, producing the 3k-bit codeword.]
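the encoder chain is easy to state in code; a sketch of the repeat-permute-accumulate steps (the permutation is random here; taking it to be the identity recovers the G = [I_k I_k I_k]A form above):

    import numpy as np

    def ra_encode(u: np.ndarray, q: int, perm: np.ndarray) -> np.ndarray:
        rep = np.tile(u, q)          # repeat the k data bits q times
        v = rep[perm]                # permute the qk bits
        return np.cumsum(v) % 2      # accumulate: c_i = v_0 + ... + v_i (mod 2),
                                     # i.e., the 1/(1+D) filter

    k, q = 4, 3
    rng = np.random.default_rng(2)
    u = np.array([1, 0, 1, 1])
    c = ra_encode(u, q, rng.permutation(q * k))   # 3k-bit non-systematic codeword
    print(c)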
non-systematic version: the codeword is the parity word, and the rate is k/p, p > k
systematic version: the codeword is the data word concatenated with the parity word, and the rate is k/(k+p)
IRA encoder:

[Figure: data word u -> G (k x p) -> accumulator 1/(1+D) -> parity word (1 x p).]
Construction of LDPC Codes (cont'd) Extended Irregular Repeat-Accumulate (eIRA) Codes
the H matrix is given below, where the column weight of H_1 is > 2; note encoding may be performed directly from the H matrix by recursively solving for the parity bits (see the sketch after the encoder figure below).
    H = [ H_1 | A^{-T} ]

where A^{-T} is the dual-diagonal (accumulator) part:

    A^{-T} = [ 1
               1 1
                 1 1
                   . .
                    1 1 ]
This matrix form holds for both eIRA codes and IRA codes; what differs is the size of H_1^T: eIRA: k x (n-k); IRA: k x p, p > k, since there it acts as a G matrix.
one can easily show that G = [ I  H_1^T A ], from which the encoder below follows; the encoder is always systematic and is appropriate for high code rates.
eIRA encoder:

[Figure: data word u -> M = H_1^T (k x (n-k)) -> accumulator 1/(1+D) (1 x (n-k)) -> parity word.]
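to illustrate the "recursively solving for the parity bits" remark, here is a sketch of eIRA-style encoding (the toy H_1 is mine; the dual-diagonal part is taken with 1s on the diagonal and subdiagonal, one of the two equivalent orientations):

    import numpy as np

    def eira_encode(u: np.ndarray, H1: np.ndarray) -> np.ndarray:
        s = (H1 @ u) % 2               # H1 is (n-k) x k here
        p = np.cumsum(s) % 2           # recursion p_0 = s_0, p_l = p_{l-1} + s_l
        return np.concatenate([u, p])  # systematic codeword [u | p]

    rng = np.random.default_rng(3)
    H1 = rng.integers(0, 2, size=(4, 4))
    T = np.eye(4, dtype=int) + np.eye(4, k=-1, dtype=int)  # dual-diagonal block
    H = np.hstack([H1, T])
    c = eira_encode(rng.integers(0, 2, size=4), H1)
    assert not ((H @ c) % 2).any()                         # c H^T = 0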
Construction of LDPC Codes (cont'd) Codes Based on Array Codes
the H matrix has the form

    H = [ I   I         I           ...  I
          I   s         s^2         ...  s^{k-1}
          I   s^2       s^4         ...  s^{2(k-1)}
          .   .         .                .
          I   s^{j-1}   s^{2(j-1)}  ...  s^{(j-1)(k-1)} ]

where I is the identity matrix and s is a p-by-p left- (or right-) cyclic shift of the identity matrix I by one position (p a prime integer), with s^0 = I and s^inf defined to be the zero matrix.

Example, p = 5:

    s = [ 0 1 0 0 0
          0 0 1 0 0
          0 0 0 1 0
          0 0 0 0 1
          1 0 0 0 0 ]

or

    s = [ 0 0 0 0 1
          1 0 0 0 0
          0 1 0 0 0
          0 0 1 0 0
          0 0 0 1 0 ]
Delete the first column of the above H matrix to remove length-4 cycles. We obtain the new H matrix:

    H = [ 1   1   1    ...            ...  1
          0   1   s    s^2  ...       ...  s^k
          0   0   1    s^2  s^4  ...  ...  s^{2k}
          .            .    .
          0   0   ...  0    1    s^{j-1} ... s^{(j-1)k} ]

(here 1 = s^0 = I and 0 = s^inf, the zero matrix). the code rate is k/(j+k); the right j x k submatrix H_I corresponds to the information bits. note that we may now encode using the H matrix by solving for the parity bits recursively: efficient encoding (more on this below).
Construction of LDPC Codes (cont'd) Codes Based on Finite Geometries and Combinatorics
Please see:
- Shu Lin and Daniel Costello, Error Control Coding, Prentice-Hall, 2004
- the papers by Shu Lin and his colleagues
- the papers by Bane Vasic and his colleagues
- the papers by Steve Weller and his colleagues
Construction of LDPC Codes (cont'd) Codes Designed Using Density Evolution and Related Techniques
Please see the papers by Richardson and Urbanke (IEEE Trans. Inf. Theory, Feb. 2001).
Encoding
first, put H in systematic form (via Gauss-Jordan elimination, with possible column swapping):

    H~ = [ P^T | I ]

from which the systematic form of the generator matrix is obtained:

    G = [ I | P ]

encoding is then c = uG, although this is more complex than it appears for capacity-approaching LDPC codes (n large).
Encoding (cont'd)
Example. Consider a (10000, 5000) linear block code. Then G = [I | P] is 5000 x 10000 and P is 5000 x 5000. We may assume that the density of 1s in P is ~ 0.5. Then:
- there are ~ 0.5(5000)^2 = 12.5 x 10^6 ones in P
- ~ 12.5 x 10^6 addition (XOR) operations are required to encode one codeword
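the Gauss-Jordan step itself is straightforward over GF(2); a sketch (it assumes the right m x m block of H can be made invertible without column swaps, which holds for the (7,4) example given earlier):

    import numpy as np

    def systematic_G(H: np.ndarray) -> np.ndarray:
        """Reduce H to [P^T | I] over GF(2) and return G = [I | P]."""
        m, n = H.shape
        k = n - m
        A = H.copy() % 2
        for r in range(m):
            col = k + r
            pivots = np.nonzero(A[r:, col])[0]
            if len(pivots) == 0:
                raise ValueError("right block is singular; swap columns first")
            A[[r, r + pivots[0]]] = A[[r + pivots[0], r]]   # bring a 1 to the pivot
            for rr in range(m):
                if rr != r and A[rr, col]:
                    A[rr] ^= A[r]                           # clear the column
        G = np.hstack([np.eye(k, dtype=int), A[:, :k].T])
        assert not ((G @ H.T) % 2).any()                    # G H^T = 0
        return G

    H = np.array([[1,1,1,0,1,0,0],
                  [1,1,0,1,0,1,0],
                  [1,0,1,1,0,0,1]])   # the (7,4) example from earlier
    G = systematic_G(H)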
Encoding (cont'd)
Richardson and Urbanke (2001) have proposed a lower-complexity (linear in the code length) encoding technique based on the H matrix (not to be discussed here). an alternative approach to simplified encoding is to design the codes via algebraic, geometric, or combinatoric methods; such structured codes are often cyclic or quasi-cyclic and lend themselves to simple encoders based on shift-register circuits. since they are simultaneously LDPC codes, the same decoding algorithms apply. often these structured codes lack freedom in the choice of code rate and length. an alternative to structured LDPC codes are the constrained irregular codes of Jin and McEliece and of Yang and Ryan (also called irregular repeat-accumulate (IRA) and extended IRA codes) -- more on this later.
Selected Results
the papers from which these plots were taken are listed in the reference section at the end of the note set; we indicate the paper each plot is taken from to ensure proper credit is given (references are listed at the end of Part 2).
our discussions above favored regular LDPC codes for their simplicity, although we gave examples of irregular LDPC codes. recall an LDPC code is irregular if the number of 1s per column of H and/or the number of 1s per row of H varies; in terms of the Tanner graph, this means that the v-node degree and/or the c-node degree is allowed to vary (the degree of a node is the number of edges connected to it). a number of researchers have examined the optimal degree distribution among nodes:
- MacKay, Trans. Comm., October 1999 - Luby, et al., Trans. IT, February 2001 - Richardson, et al., Trans. IT, February 2001 - Chung, et al., Comm. Letters, February 2001
the results have been spectacular, with performance surpassing the best turbo codes
[Plot: bit error rate P_b vs. E_b/N_0 (dB), 2.4-4.0 dB, maximum 100 iterations.]

[Plot: probability of error P_e vs. E_b/N_0 (dB), 2.5-4.5 dB.]
Decoding Overview
in addition to presenting LDPC codes in his seminal work in 1960, Gallager also provided a decoding algorithm that is effectively optimal. since that time, other researchers have independently discovered that algorithm and related algorithms, albeit sometimes for different applications. the algorithm iteratively computes the distributions of variables in graph-based models and comes under different names, depending on the context:
- sum-product algorithm
- min-sum algorithm (approximation)
- forward-backward algorithm, BCJR algorithm (trellis-based graphical models)
- belief-propagation algorithm, message-passing algorithm (machine learning, AI, Bayesian networks)
the iterative decoding algorithm for turbo codes has been shown by McEliece (1998) and others to be a specific instance of the sum-product/belief-propagation algorithm. the terms "sum-product," "belief propagation," and "message passing" all seem to be commonly used for the algorithm applied to the decoding of LDPC codes.
Example: Distributed Soldier Counting A. Soldiers in a line. Counting rule: Each soldier receives a number from his right (left), adds one for himself, and passes the sum to his left (right). Total number of soldiers = (incoming number) + (outgoing number)
B. Soldiers in a Y Formation
Counting rule: The message that soldier X passes to soldier Y is the sum of all incoming messages, plus one for soldier X, minus soldier Y's message (n(X) denotes the neighbors of X):

    I_{X->Y} = sum over Z in n(X) of I_{Z->X}  -  I_{Y->X}  +  I_X
             = sum over Z in n(X)\Y of I_{Z->X}  +  I_X
             = extrinsic information + intrinsic information

Total number of soldiers = (message soldier X passes to soldier Y) + (message soldier Y passes to soldier X)
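the counting rule is exactly a message-passing recursion on the formation graph; a sketch (the soldier labels and helper name are invented):

    def msg(adj, x, y):
        """I_{X->Y}: one for X plus all incoming messages except Y's."""
        return 1 + sum(msg(adj, z, x) for z in adj[x] if z != y)

    # Y formation with 5 soldiers; soldier 2 is the junction
    adj = {0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2], 4: [2]}
    # total = message X->Y plus message Y->X, for any edge (X, Y)
    assert msg(adj, 1, 2) + msg(adj, 2, 1) == 5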
C. Formation Contains a Cycle
The situation is untenable: no viable counting strategy exists; there is also a positive-feedback effect within the cycle, and the count tends to infinity.
Conclusion: message-passing decoding cannot be optimal when the code's graph contains a cycle.
The Turbo Principle Applied to LDPC Decoding
the concept of extrinsic information is helpful in understanding the sum-product/message-passing algorithm (the messages to be passed are extrinsic information). we envision Tanner graph edges as information-flow pathways to be followed in the iterative computation of various probabilistic quantities. this is similar to (a generalization of) the use of trellis branches as paths in the Viterbi-algorithm implementation of maximum-likelihood sequence detection/decoding.
The arrows indicate the situation for x_0 -> f_2: all of the information that x_0 possesses is sent to node f_2, except for the information that node f_2 already possesses (extrinsic information). In one half-iteration of the decoding algorithm, such computations, x_i -> f_j, are made for all v-node/c-node pairs.
In the other half-iteration, messages are passed in the opposite direction: from c-nodes to v-nodes, f_j -> x_i. Consider the subgraph corresponding to this half-iteration:

[Figure: check node f_0 connected to bit nodes x_0, x_1, x_2, x_4.]

the information passed concerns Pr(check equation f_0 is satisfied). The arrows indicate the situation for f_0 -> x_4: node f_0 passes all (extrinsic) information it has available to it to each of the bit nodes x_i, excluding the information the receiving node already possesses; only information consistent with c_0 + c_1 + c_2 + c_4 = 0 is sent.
Probability-Domain Decoder
much like optimal (MAP) symbol-by-symbol decoding of trellis codes, we are interested in computing the a posteriori probability (APP) that a given bit in c equals one, given the received word y and the fact that c must satisfy some constraints. without loss of generality, let us focus on the decoding of bit c_i; thus we are interested in computing

    Pr(c_i = 1 | y, S_i)

where S_i is the event that the bits in c satisfy the W_c parity-check equations involving c_i.
later we will extend this to the more numerically stable computation of the log-APP ratio, also called the log-likelihood ratio (LLR):

    L(c_i) = log [ Pr(c_i = 0 | y, S_i) / Pr(c_i = 1 | y, S_i) ]
Lemma 1 (Gallager). Consider a sequence of m independent binary digits a = (a_1, ..., a_m) in which Pr(a_k = 1) = p_k. Then the probability that a contains an even number of 1s is

    1/2 + (1/2) prod_{k=1..m} (1 - 2p_k)     (*)

The probability that a contains an odd number of 1s is one minus this value:

    1/2 - (1/2) prod_{k=1..m} (1 - 2p_k)

proof: induction on m.
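the lemma is easy to confirm numerically by brute-force enumeration; a small sketch (the p_k values are arbitrary):

    import itertools, math

    p = [0.1, 0.3, 0.25, 0.4]                      # Pr(a_k = 1), chosen arbitrarily
    even = sum(math.prod(pk if b else 1 - pk for pk, b in zip(p, bits))
               for bits in itertools.product([0, 1], repeat=len(p))
               if sum(bits) % 2 == 0)              # total prob. of even-weight patterns
    closed = 0.5 + 0.5 * math.prod(1 - 2 * pk for pk in p)
    assert abs(even - closed) < 1e-12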
Notation
- V_j = {v-nodes connected to c-node f_j}
- V_j\i = {v-nodes connected to c-node f_j} \ {v-node x_i}
- C_i = {c-nodes connected to v-node x_i}
- C_i\j = {c-nodes connected to v-node x_i} \ {c-node f_j}

Notation (cont'd)
- P_i = Pr(c_i = 1 | y_i) = probability that c_i = 1 given the channel sample y_i

Notation (cont'd)
- q_ij(b) = message to be passed from node x_i to node f_j: the probability that c_i = b given the channel sample y_i and the extrinsic information from the check nodes in C_i\j
- r_ji(b) = message to be passed from node f_j to node x_i: the probability of the jth check equation being satisfied given bit c_i = b and the other bits have separable distribution given by {q_i'j}

[Figure: the message q_ij(b) flowing from x_i up to f_j, and r_ji(b) from f_j down to x_i.]
in view of Lemma 1,

    r_ji(0) = 1/2 + (1/2) prod_{i' in V_j\i} (1 - 2 q_i'j(1))

    r_ji(1) = 1 - r_ji(0) = 1/2 - (1/2) prod_{i' in V_j\i} (1 - 2 q_i'j(1))

further, observe that, assuming independence of the {r_ji(b)},

    q_ij(0) = (1 - P_i) prod_{j' in C_i\j} r_j'i(0)

    q_ij(1) = P_i prod_{j' in C_i\j} r_j'i(1)
as indicated above, the algorithm iterates back and forth between {q_ij} and {r_ji}; we already know how to pass the messages q_ij(b) and r_ji(b) around.
before we give the iterative decoding algorithm, we will need the following result for the AWGN channel y = x + n, with n ~ N(0, sigma^2) and equiprobable x:

    Pr(x = a | y) = 1 / (1 + e^{-2ay/sigma^2})   (with a in {+1, -1})
Sum-Product Algorithm Summary - Probability Domain
(perform looping below over all i, j for which h_ij = 1)

(0) initialize:

    q_ij(0) = 1 - P_i = 1 / (1 + e^{-2y_i/sigma^2})
    q_ij(1) = P_i = 1 / (1 + e^{+2y_i/sigma^2})

(1) compute

    r_ji(0) = 1/2 + (1/2) prod_{i' in V_j\i} (1 - 2 q_i'j(1))
    r_ji(1) = 1 - r_ji(0)

(2) compute

    q_ij(0) = K_ij (1 - P_i) prod_{j' in C_i\j} r_j'i(0)
    q_ij(1) = K_ij P_i prod_{j' in C_i\j} r_j'i(1)

where the constants K_ij are chosen to ensure q_ij(0) + q_ij(1) = 1

(3) compute

    Q_i(0) = K_i (1 - P_i) prod_{j in C_i} r_ji(0)
    Q_i(1) = K_i P_i prod_{j in C_i} r_ji(1)

(4) for each i, set c^_i = 1 if Q_i(1) > Q_i(0), else c^_i = 0; if c^H^T = 0 or the number of iterations equals the maximum limit, STOP; else, go to step (1)
Remarks
(a) if the graph corresponding to the H matrix contains no cycles (i.e., is a tree), then Q_i(0) and Q_i(1) will converge to the true a posteriori probabilities for c_i as the number of iterations tends to infinity
(b) (for good LDPC codes) the algorithm is able to detect an uncorrected codeword with near-unity probability (step (4)), unlike turbo codes [MacKay]
(c) this algorithm is applicable to the binary symmetric channel as well; only the initialization of P_i = Pr(c_i = 1 | y_i) changes (P_i = 1 - eps if y_i = 1 and P_i = eps if y_i = 0, where eps is the crossover probability)
as with the probability-domain Viterbi and BCJR algorithms, the probability-domain message-passing algorithm suffers because (1) multiplications are involved (additions are less costly to implement) and (2) many multiplications of probabilities are involved, which could become numerically unstable (imagine a very long code with 50-100 iterations). thus, as with the Viterbi and BCJR algorithms, a log-domain version of the message-passing algorithm is to be preferred.
we thus define the LLRs

    L(r_ji) = log [ r_ji(0) / r_ji(1) ]
    L(q_ij) = log [ q_ij(0) / q_ij(1) ]
    L(Q_i)  = log [ Q_i(0) / Q_i(1) ]
from (*) on the previous page,

    2 r_ji(0) = 1 + prod_{i' in V_j\i} (1 - 2 q_i'j(1))
    1 - 2 r_ji(1) = prod_{i' in V_j\i} (1 - 2 q_i'j(1))

so that, using 1 - 2q(1) = tanh(L(q)/2),

    tanh( (1/2) L(r_ji) ) = prod_{i' in V_j\i} tanh( (1/2) L(q_i'j) )     (**)
the problem with these expressions is that we are still left with a product; we can remedy this as follows (Gallager):
rewrite L(q_ij) as

    L(q_ij) = alpha_ij beta_ij

where

    alpha_ij = sign( L(q_ij) ),   beta_ij = | L(q_ij) |

then (**) on the previous page can be written as

    tanh( (1/2) L(r_ji) ) = prod_{i'} alpha_i'j * prod_{i'} tanh( (1/2) beta_i'j )
we then have

    L(r_ji) = prod_{i'} alpha_i'j * 2 tanh^{-1}( prod_{i'} tanh( (1/2) beta_i'j ) )
            = prod_{i'} alpha_i'j * 2 tanh^{-1} log^{-1} log ( prod_{i'} tanh( (1/2) beta_i'j ) )
            = prod_{i' in V_j\i} alpha_i'j * phi( sum_{i' in V_j\i} phi(beta_i'j) )

where

    phi(x) = -log tanh(x/2) = log( (e^x + 1) / (e^x - 1) )

note that phi(phi(x)) = x, i.e., phi is its own inverse.

[Plot: phi(x) together with the 45-degree line, illustrating phi^{-1} = phi.]
for step (2), we simply divide the q_ij(0) eqn by the q_ij(1) eqn and take the log of both sides to obtain

    L(q_ij) = L(c_i) + sum_{j' in C_i\j} L(r_j'i)
Sum-Product Algorithm Summary - Log Domain
(perform looping below over all i, j for which h_ij = 1)

(0) initialize:

    L(q_ij) = L(c_i) = 2y_i/sigma^2

(1) compute

    L(r_ji) = prod_{i' in V_j\i} alpha_i'j * phi( sum_{i' in V_j\i} phi(beta_i'j) )

where

    alpha_ij = sign(L(q_ij)),  beta_ij = |L(q_ij)|,  phi(x) = -log tanh(x/2) = log( (e^x + 1)/(e^x - 1) )

[Figure: c-node f_j collecting v-node messages m_1^(v), m_2^(v), m_3^(v) from x_0, x_1, x_2 and sending m^(c) to x_i.]

(2) compute

    L(q_ij) = L(c_i) + sum_{j' in C_i\j} L(r_j'i)

[Figure: v-node x_i combining its channel LLR with c-node messages m_1^(c), m_2^(c) from f_0, f_1 and sending m^(v) to f_j.]

(3) compute

    L(Q_i) = L(c_i) + sum_{j in C_i} L(r_ji)

(4) for each i, set c^_i = 1 if L(Q_i) < 0, else c^_i = 0; if c^H^T = 0 or the number of iterations equals the maximum limit, STOP; else, go to step (1)
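pulling steps (0)-(4) together, here is a compact sketch of the log-domain decoder (it assumes numpy and is written for clarity, not speed; the clipping constants are my own guards against overflow):

    import numpy as np

    def phi(x):
        x = np.clip(x, 1e-12, 50.0)             # guard log(0) / overflow
        return -np.log(np.tanh(x / 2.0))        # phi is its own inverse

    def spa_decode(H, y, sigma2, max_iter=50):
        m, n = H.shape
        L_ci = 2.0 * y / sigma2                 # channel LLRs
        Lq = H * L_ci                           # (0) L(q_ij) = L(c_i) where h_ij = 1
        for _ in range(max_iter):
            # (1) check-node update via the sign/phi rule
            alpha = np.sign(Lq) + (Lq == 0)     # treat LLR 0 as +1
            beta = np.abs(Lq)
            Lr = np.zeros_like(Lq)
            for j in range(m):
                idx = np.flatnonzero(H[j])
                for i in idx:
                    others = idx[idx != i]
                    Lr[j, i] = (np.prod(alpha[j, others])
                                * phi(np.sum(phi(beta[j, others]))))
            # (2)+(3) bit-node update and total LLRs
            LQ = L_ci + Lr.sum(axis=0)
            Lq = H * (LQ - Lr)                  # exclude the target check's message
            # (4) tentative decision and syndrome check
            c_hat = (LQ < 0).astype(int)
            if not ((H @ c_hat) % 2).any():
                break
        return c_hat, LQ

run against, e.g., the (10, 5) H given earlier with y = x + noise, this reproduces steps (0)-(4) exactly; the min-sum variant discussed later changes only the inner phi computation.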
Improved Notation: Log-SPA Algorithm for LDPC Codes on the AWGN Channel
1. initialize the channel message for each v-node via m_0 = 2y/sigma^2, so that the first v-node messages are m^(v) = m_0;

2. compute the v-node (bit-to-check) messages

    m^(v) = m_0 + sum_{k=1..d_v - 1} m_k^(c)

where the m_k^(c) are the incoming messages on the v-node's other edges (all zero on the first pass);

[Figure: v-node x_0 combining m_0 with incoming messages m_1^(c), m_2^(c) and sending m^(v).]

3. compute the c-node (check-to-bit) messages: each c-node computes, for each edge, m^(c) from the incoming messages m_1^(v), m_2^(v), m_3^(v), ... on its other edges, using the sign/phi rule of step (1) above;

[Figure: c-node f_0 with incoming m_1^(v), m_2^(v), m_3^(v) from x_1, x_2, x_4 and outgoing m^(c) to x_0.]

4. compute

    M^(v) = m_0 + sum_{k=1..d_v} m_k^(c)

(hence c^);

5. if c^H^T = 0 OR (# iterations = max_iterations) OR (other stopping rule), STOP; else go to step 2.
Comments
1. The order of the steps in the above algorithm summary is different than in the summary given earlier, in part to show that equivalent variations on the algorithm exist.
2. In fact, the following ordering of the steps saves some computations (with a modification to Step 2): Step 1, Step 4, Step 5, Step 2', Step 3, Step 4, Step 5, Step 2', ...

Step 2'. This step is modified as: m^(v) = M^(v) - m_{d_v}^(c)
Example
consider an (8, 4) product code (d_min = 4) composed of a (3, 2) single-parity-check code (d_min = 2) along rows and columns:

    c_0 c_3 c_6
    c_1 c_4 c_7
    c_2 c_5

    H = [ 1 1 1 0 0 0 0 0
          0 0 0 1 1 1 0 0
          1 0 0 1 0 0 1 0
          0 1 0 0 1 0 0 1 ]
[Figure: Tanner graph with bit nodes x_0, ..., x_7 connected to the four check nodes of H.]

note that the code is neither low-density nor regular, but it will suffice to demonstrate the decoding algorithm.
suppose the transmitted codeword and the received word are (same array layout as above)

    c = [ 1 0 1      x_i = (-1)^{c_i}:   x = [ -1 +1 -1
          0 1 1                                +1 -1 -1
          1 1 ]                                -1 -1 ]

    y = [ y_0 y_3 y_6     [ +.2 +.6  -.4
          y_1 y_4 y_7  =    +.2 +.5 -1.2
          y_2 y_5 ]         -.9 -1.1 ]

initialize:

    L(q_ij) = L(c_i) = 2y_i/sigma^2
The Min-Sum Decoder
consider the update equation for L(r_ji) in the log-domain decoder:

    L(r_ji) = prod_{i' in V_j\i} alpha_i'j * phi( sum_{i' in V_j\i} phi(beta_i'j) )

notice now the shape of phi(x):

[Plot: phi(x) = -log tanh(x/2), very large near x = 0 and decaying rapidly.]

we may conclude that the term corresponding to the smallest beta_i'j in the above summation dominates, so that

    phi( sum_{i'} phi(beta_i'j) ) ~ phi( phi( min_{i'} beta_i'j ) ) = min_{i' in V_j\i} beta_i'j

the min-sum algorithm is thus simply the log-domain algorithm with step (1) replaced by

(1)

    L(r_ji) = prod_{i' in V_j\i} alpha_i'j * min_{i' in V_j\i} beta_i'j
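in code, the min-sum replacement for the check-node update is just a sign product and a minimum; a sketch (the attenuation factor A anticipates the MSa variant discussed shortly; A = 1 gives plain min-sum):

    import numpy as np

    def minsum_check_update(Lq_in, A=1.0):
        """Outgoing L(r_ji) for each edge of one check node, given incoming LLRs."""
        Lq_in = np.asarray(Lq_in, dtype=float)
        out = np.empty_like(Lq_in)
        for t in range(len(Lq_in)):
            others = np.delete(Lq_in, t)                 # exclude the target edge
            sign = np.prod(np.where(others >= 0, 1.0, -1.0))
            out[t] = A * sign * np.abs(others).min()     # smallest beta dominates
        return out

    print(minsum_check_update([1.2, -0.4, 3.0]))         # [-0.4, 1.2, -0.4]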
Example
with (same array layout as before)

    y = [ 1.5 .7  .4
          .9  .5 1.2
          .8 1.1 ]

compute {L(c_i)}_{i=0..7} = {2y_i/sigma^2} and iterate; once c^H^T = 0, STOP.
In a code's Tanner graph, the edges leading from bit nodes into a check node indicate that the sum of those bits must equal zero,

    ... + b_g + b_h + b_j + b_k + ... = 0

so the check-node LLR update combines the incoming bit LLRs L_g, L_h, L_j, L_k, ...; see W. E. Ryan, "An Introduction to LDPC Codes," in CRC Handbook for Coding and Signal Processing for Recording Systems (B. Vasic, ed.), CRC Press, to be published in 2004 (http://www.ece.arizona.edu/~ryan/).
[Scatter plot: min-sum vs. SPA LLR values; Wc = Wr = 64, c = 0.5]

[Scatter plot: min-sum vs. SPA LLR values; Wc = 4, Wr = 32, c = 0.5]
By plotting the min-sum LLR values against the "exact" SPA values (same data and noise), we see that the min-sum LLR values are generally too optimistic (too large, where large implies better reliability).
By attenuating the min-sum values, the resulting LLR values are sometimes optimistic and sometimes pessimistic (relative to the SPA), but a better balance is struck, hence the improvement over the min-sum algorithm.
the attenuated min-sum algorithm thus replaces step (1) by

(1)

    L(r_ji) = A * prod_{i' in V_j\i} alpha_i'j * min_{i' in V_j\i} beta_i'j

where A < 1 is the attenuation factor.
It appears from the plot above that there might be some advantage to turning off the attenuation factor after 5 iterations, because the values become generally pessimistic on the 6th iteration; we have not simulated this modification to the MSa.x algorithm.
Also, an attenuation factor of 0.8 is better than the factor of 0.5 shown here, but clearly the factor of 0.5 is to be preferred for practical reasons (multiplication by 0.5 is just a shift).
Below we present some simulation results for the following algorithms:
- SPA = sum-product algorithm
- MS = min-sum algorithm
- MSc = min-sum with a correction term
- MSa = min-sum with an attenuator
[Plot: Pb and Pcw curves for MS, MSa.5 (min-sum with attenuator 0.5), MSc.5 (min-sum with correction term 0.5), and SPA, all with 50 decoding iterations.]

[Plot: repeat of the above, but with 10 iterations and without the min-sum curves.]

[Plot: repeat of the above, but with 5 iterations and without the min-sum curves.]
Iterative Decoding on the Packet Erasure Channel An analogous algorithm of course applies for the binary erasure channel (BEC) or the burst erasure channel (BuEC).
[Figure: decoding example on the erasure channel. (a) received word at the variable nodes, with packets 3-6 erased: 1101, 1001, 0001, ????, ????, ????, ????, 0111, 1111, 0010; (b)-(c) successive iterations, in which check nodes C0-C4 fill in the erased packets.]
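the erasure-channel version reduces to "peeling": any check node with exactly one erased neighbor determines that bit (or packet) as the XOR of its known neighbors. a sketch in the same spirit as the figure (bit-level for simplicity; for packets, XOR component-wise):

    def peel_decode(H, y):
        """y: received bits with None marking an erasure; H: list of 0/1 rows."""
        y = list(y)
        progress = True
        while progress and None in y:
            progress = False
            for row in H:
                idx = [i for i, h in enumerate(row) if h]
                erased = [i for i in idx if y[i] is None]
                if len(erased) == 1:               # this check solves one erasure
                    y[erased[0]] = sum(y[i] for i in idx
                                       if y[i] is not None) % 2
                    progress = True
        return y                                   # any remaining None: failure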
REFERENCES (FOR PARTS I AND II)
Primary references used to write these notes:
- R. Gallager, "Low-density parity-check codes," IRE Trans. Information Theory, pp. 21-28, Jan. 1962.
- D. MacKay, "Good error correcting codes based on very sparse matrices," IEEE Trans. Information Theory, pp. 399-431, March 1999.
- J. Fan, Constrained Coding and Soft Iterative Decoding for Storage, Ph.D. dissertation, Stanford University, December 1999. (See also his Kluwer monograph.)
Secondary references used to write these notes:
- R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inf. Theory, pp. 533-547, Sept. 1981.
- J. Hagenauer et al., "Iterative decoding of binary block and convolutional codes," IEEE Trans. Inf. Theory, pp. 429-445, March 1996.
- M. Fossorier et al., "Reduced complexity iterative decoding of low-density parity-check codes based on belief propagation," IEEE Trans. Comm., pp. 673-680, May 1999.
- T. Richardson et al., "Efficient encoding of low-density parity-check codes," IEEE Trans. Inf. Theory, Feb. 2001.
- D. MacKay et al., "Comparison of constructions of irregular Gallager codes," IEEE Trans. Comm., October 1999.
- T. Richardson et al., "Design of capacity-approaching irregular low-density parity-check codes," IEEE Trans. Inf. Theory, Feb. 2001.
- M. Luby et al., "Improved low-density parity check codes using irregular graphs," IEEE Trans. Inf. Theory, Feb. 2001.
- S.-Y. Chung et al., "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Comm. Letters, pp. 58-60, Feb. 2001.
- Y. Kou, S. Lin, and M. Fossorier, "Low density parity check codes based on finite geometries: A rediscovery and more," submitted to IEEE Trans. Inf. Theory.
- R. Lucas, M. Fossorier, Y. Kou, and S. Lin, "Iterative decoding of one-step majority-logic decodable codes based on belief propagation," IEEE Trans. Comm., pp. 931-937, June 2000.
- M. Yang, W. E. Ryan, and Y. Li, "Design of efficiently encodable moderate-length high-rate irregular LDPC codes," IEEE Trans. Comm., April 2004.
- W. E. Ryan, "An Introduction to LDPC Codes," in CRC Handbook for Coding and Signal Processing for Recording Systems (B. Vasic, ed.), CRC Press, to be published in 2004. (http://www.ece.arizona.edu/~ryan/)