ERROR CORRECTING CODES
Part 1
Jack Keil Wolf
ECE 154C
Spring 2008
Noisy Communications
• Noise in a communications channel can cause
errors in the transmission of binary digits.
• Transmit: 1 1 0 0 1 0 1 0 1 1 1 0 0 0 0 1 0 …
• Receive: 1 1 0 1 1 0 1 0 0 0 1 0 0 0 0 1 0 …
• Sometimes one or more of the RNA symbols in a codon is changed.
• Hopefully, the resultant triplet still decodes to the same protein.
  (AUG is the start codon.)
• Convolutional Codes

  Message Source --> Convolutional Encoder
  k binary digits in (0101...1), n binary digits out (011010...11), n > k

  Rate = k / n
TYPES OF ECC
• Binary Codes
• Nonbinary Codes
HAMMING (7,4) CODE

• The circles in the figure represent the parity check equations; there
  is an even number of 1’s in each circle.

  C1 ⊕ C2 ⊕ C3 ⊕ C5 = 0
  C1 ⊕ C3 ⊕ C4 ⊕ C6 = 0
  C1 ⊕ C2 ⊕ C4 ⊕ C7 = 0

• Parity Check Matrix:

  1 1 1 0 1 0 0
  1 0 1 1 0 1 0
  1 1 0 1 0 0 1
HAMMING (7,4) CODE: ENCODING

• Message: (C1 C2 C3 C4) = (0 1 1 0)
• Parity digits: C5 = 0, C6 = 1, C7 = 1
• Code word: 0 1 1 0 0 1 1

HAMMING (7,4) CODE: DECODING

• Suppose the 6th digit of the transmitted code word is received in
  error.
• There is no error in the right circle.
• There is an error in the bottom circle.
• There is no error in the left circle.
• The 6th digit is in error!
HAMMING (7,4) CODE: DECODING

• There is an error in the right circle.
• There is no error in the bottom circle.
• There is an error in the left circle.
• The decoder concludes that the 2nd digit is in error.

WRONG!!!

• When more than one digit is in error, the single-error-correcting
  decoder can be fooled into "correcting" the wrong digit.
HAMMING (7,4) CODE: SYNDROME DECODING

• Let R1 R2 R3 R4 R5 R6 R7 be the received block of binary digits,
  possibly with errors.

  R1 ⊕ R2 ⊕ R3 ⊕ R5 = S1
  R1 ⊕ R3 ⊕ R4 ⊕ R6 = S2
  R1 ⊕ R2 ⊕ R4 ⊕ R7 = S3

• The triple (S1, S2, S3) is called the syndrome.
HAMMING (7,4) CODE: SYNDROME DECODING

• Resultant code word: 0 1 1 0 0 1 1

• The 16 code words of the Hamming (7,4) code are:

  0 0 0 0 0 0 0
  1 0 0 0 1 1 1
  0 1 0 0 1 0 1
  0 0 1 0 1 1 0
  0 0 0 1 0 1 1
  1 1 0 0 0 1 0
  1 0 1 0 0 0 1
  1 0 0 1 1 0 0
  0 1 1 0 0 1 1
  0 1 0 1 1 1 0
  0 0 1 1 1 0 1
  1 1 1 0 1 0 0
  1 1 0 1 0 0 1
  1 0 1 1 0 1 0
  0 1 1 1 0 0 0
  1 1 1 1 1 1 1
PROPERTIES OF BINARY PARITY
CHECK CODES
• An (n,k) binary parity check code (also called an (n,k) group
code) is a set of code words of length n, which consist of all of
the binary n-vectors which are the solutions of r = (n-k) linearly
independent equations called parity check equations.
• Note that the all-zero vector always satisfies these parity check
equations since any subset of the components of the all-zero
vector sums to 0 modulo 2.
PROPERTIES OF BINARY PARITY
CHECK CODES
• The coefficients of the r = (n-k) linearly independent
parity check equations can be written as a matrix
called the parity check matrix and is denoted H.
• The i-jth entry (ith row and jth column) in this parity
check matrix, hi,j, is equal to 1 if and only if the jth
component of a code word is contained in the ith
parity check equation. Otherwise it is equal to 0.
FOR HAMMING (7,4) CODE
• For the Hamming (7,4) code there were 3 equations
C1 ⊕ C2 ⊕ C3 ⊕ C5 = 0
C1 ⊕ C3 ⊕ C4 ⊕ C6 = 0
C1 ⊕ C2 ⊕ C4 ⊕ C7 = 0.
      1 1 1 0 1 0 0
  H = 1 0 1 1 0 1 0
      1 1 0 1 0 0 1 .
FOR HAMMING (7,4) CODE
• For the Hamming (7,4) code there were 3 linearly
independent equations
C1 ⊕ C2 ⊕ C3 ⊕ C5 = 0
C1 ⊕ C3 ⊕ C4 ⊕ C6 = 0
C1 ⊕ C2 ⊕ C4 ⊕ C7 = 0
      1 0 0 0 1 1 1
  G = 0 1 0 0 1 0 1
      0 0 1 0 1 1 0
      0 0 0 1 0 1 1
• In systematic form, H = [A | I_r] and G = [I_k | A^T], where I_r and
  I_k are identity matrices.
• But since the rows of G are all code words, H and G must satisfy the
  matrix equation:

  H G^T = 0.

  Here G^T is the transpose of the matrix G.
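The relation H G^T = 0 (mod 2) is easy to check numerically; a small sketch using the H and G of the Hamming (7,4) code:

```python
# Verify H * G^T = 0 (mod 2) for the Hamming (7,4) matrices.
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]
G = [[1, 0, 0, 0, 1, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]

# Each entry is a row of H dotted with a row of G, reduced mod 2.
HGt = [[sum(h[j] * g[j] for j in range(7)) % 2 for g in G] for h in H]
print(HGt)  # every entry is 0
```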
PROPERTIES OF BINARY PARITY
CHECK CODES
• Proof: multiply each row of H by each row of G:

      1 1 1 0 1 0 0        1 0 0 0 1 1 1
  H = 1 0 1 1 0 1 0    G = 0 1 0 0 1 0 1
      1 1 0 1 0 0 1        0 0 1 0 1 1 0
                           0 0 0 1 0 1 1

  Each row of H has an even number of 1’s in common with each row of G,
  so every inner product is 0 mod 2 and H G^T = 0.
PROPERTIES OF BINARY PARITY
CHECK CODES
• If X and Y are any two binary vectors of the same length, define the
  Hamming distance between X and Y, denoted dH(X,Y), as the number of
  positions in which X and Y differ.
• Since the code words are the solutions of linear equations, the sum
  of any two code words is itself a code word:

  Ci ⊕ Cj = Ck.
• But then dH(Ci, Cj) equals the number of 1’s in Ck, so the minimum
  distance of the code equals the minimum number of 1’s in any nonzero
  code word.
• Every code word C satisfies the parity check equations:

  H C = 0.
• A Hamming code can be modified by appending to each code word an
  overall parity digit, chosen so that each new code word contains an
  even number of 1’s.
• However, the new code has one more parity digit and the same number
  of information digits as the original code. (The new code has block
  length one more than the original code.)
MODIFYING A HAMMING (7,4)
CODE
• Original (7,4) code had a parity check matrix given
as:
1 1 1 0 1 0 0
1 0 1 1 0 1 0
1 1 0 1 0 0 1
• The new code is an (8,4) code with parity check
matrix:
1 1 1 0 1 0 0 0
1 0 1 1 0 1 0 0
1 1 0 1 0 0 1 0
1 1 1 1 1 1 1 1
• The new code has dmin = 4.
MODIFYING A HAMMING (7,4)
CODE
• But this parity check matrix does not have a unit matrix on the
right. We can make this happen by replacing the last equation
with the sum of all of the equations resulting in the parity check
matrix:
1 1 1 0 1 0 0 0
1 0 1 1 0 1 0 0
1 1 0 1 0 0 1 0
0 1 1 1 0 0 0 1
• Note that dmin = 4 since 4 columns sum to 0 (e.g., the 1st, 5th, 6th
and 7th) but no fewer than 4 columns sum to 0.
HAMMING (8,4) CODE
• The code words in the new code are:
0 0 0 0 0 0 0 0
1 0 0 0 1 1 1 0
0 1 0 0 1 0 1 1
0 0 1 0 1 1 0 1
0 0 0 1 0 1 1 1
1 1 0 0 0 1 0 1
1 0 1 0 0 0 1 1
1 0 0 1 1 0 0 1
0 1 1 0 0 1 1 0
0 1 0 1 1 1 0 0
0 0 1 1 1 0 1 0
1 1 1 0 1 0 0 0
1 1 0 1 0 0 1 0
1 0 1 1 0 1 0 0
0 1 1 1 0 0 0 1
1 1 1 1 1 1 1 1
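The claim dmin = 4 can be confirmed by enumerating the code. A sketch (function name is mine) that builds the (8,4) code word table above and finds the minimum nonzero weight:

```python
from itertools import product

# Enumerate the (8,4) extended Hamming code and confirm dmin = 4
# (for a linear code, dmin equals the minimum nonzero code word weight).
def encode84(c1, c2, c3, c4):
    c5 = c1 ^ c2 ^ c3            # parity checks of the (7,4) code
    c6 = c1 ^ c3 ^ c4
    c7 = c1 ^ c2 ^ c4
    word7 = (c1, c2, c3, c4, c5, c6, c7)
    c8 = sum(word7) % 2          # overall parity digit
    return word7 + (c8,)

weights = [sum(encode84(*m)) for m in product([0, 1], repeat=4) if any(m)]
print(min(weights))  # 4
```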
DECODING OF BINARY PARITY CHECK CODES

• Assume that one is using a code with parity check matrix H and that
  the transmitter transmits a code word C.
• The receiver receives R = C ⊕ E, where E is the (unknown) error
  pattern, and computes the syndrome:

  S = H R = H(C ⊕ E) = HC ⊕ HE = 0 ⊕ HE.

• Thus the syndrome depends only on the error pattern:

  S = HE.
BINARY PARITY CHECK CODES: M.L. DECODING

• If the code words are transmitted with equal a priori probability
  over a B.S.C. with error probability p, p < 0.5, a decoder which
  results in the smallest probability of word error is as follows:

  Compare the received vector R with every code word and choose the
  code word that differs from it in the fewest positions.

• In syndrome decoding, the syndrome S = HR is computed first.
• Then the syndrome is used as the address in the decoding table with
  2^r entries, and the error pattern is read from the table.
• The error pattern is then added to R to find the decoded code word.
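The minimum-distance rule above can be sketched directly (function names are mine, not the slides'):

```python
from itertools import product

def encode(m):
    c1, c2, c3, c4 = m
    return (c1, c2, c3, c4, c1 ^ c2 ^ c3, c1 ^ c3 ^ c4, c1 ^ c2 ^ c4)

# All 16 code words of the Hamming (7,4) code.
CODEWORDS = [encode(m) for m in product([0, 1], repeat=4)]

def ml_decode(r):
    # Choose the code word that differs from R in the fewest positions.
    return min(CODEWORDS, key=lambda c: sum(ci != ri for ci, ri in zip(c, r)))

# The code word 0 1 1 0 0 1 1 with its 6th digit flipped:
print(ml_decode((0, 1, 1, 0, 0, 0, 1)))  # (0, 1, 1, 0, 0, 1, 1)
```

Since dmin = 3, any single error leaves a unique closest code word, so the search recovers the transmitted word.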
BINARY PARITY CHECK CODES: SYNDROME DECODING

• Block diagram of the syndrome decoder: the information digits R1…RK
  address an encoding table; the re-encoded parity digits are added
  (mod 2) to the received parity digits RK+1…RN to form the syndrome S;
  S addresses a decoding table whose output is the error pattern E; and
  C = R ⊕ E, where R = R1…RN.
TABLE LOOK UP DECODER: HAMMING (7,4) CODE

• For the Hamming (7,4) code, the encoding table has size 16 (addressed
  by R1…R4) and the decoding table has size 8 (addressed by the
  syndrome S = S1…S3). The error pattern E from the decoding table is
  added to R = R1…R7 to give the decoded code word C.
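The size-8 decoding table is small enough to write out: it maps each of the 8 syndromes to an error pattern (the all-zero pattern plus the 7 single-bit errors). A sketch:

```python
# Table look-up syndrome decoder for the Hamming (7,4) code.
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def syndrome(r):
    return tuple(sum(h[j] * r[j] for j in range(7)) % 2 for h in H)

# Decoding table (8 entries): syndrome -> correctable error pattern.
table = {(0, 0, 0): (0,) * 7}
for i in range(7):
    e = tuple(1 if j == i else 0 for j in range(7))
    table[syndrome(e)] = e

def decode(r):
    e = table[syndrome(r)]            # look up the error pattern
    return tuple(ri ^ ei for ri, ei in zip(r, e))  # C = R + E

# Code word 0 1 1 0 0 1 1 with an error in the 6th digit:
print(decode((0, 1, 1, 0, 0, 0, 1)))  # (0, 1, 1, 0, 0, 1, 1)
```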
CONSTRUCTING OTHER
BINARY HAMMING CODES
• One can construct single error correcting binary
codes with dmin=3 having other block lengths.
n 7 15 31 63 127 255
k 4 11 26 57 120 247
r 3 4 5 6 7 8
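The table follows the Hamming pattern n = 2^r − 1, k = n − r; a quick numerical check:

```python
# Hamming code parameters: n = 2^r - 1, k = n - r, for r = 3..8.
params = [(2**r - 1, 2**r - 1 - r, r) for r in range(3, 9)]
for n, k, r in params:
    print(n, k, r)   # reproduces the (n, k, r) table above
```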
THE GOLAY (23,12) CODE
• This code has dmin=7. A parity check matrix for this
code is:
1 1 1 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0
0 1 1 1 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 0 0 0 0
1 1 0 0 0 1 1 1 0 1 1 0 0 0 1 0 0 0 0 0 0 0 0
0 1 1 0 0 0 1 1 1 0 1 1 0 0 0 1 0 0 0 0 0 0 0
1 1 0 0 1 0 0 0 1 1 1 1 0 0 0 0 1 0 0 0 0 0 0
H= 1 0 0 1 1 1 0 1 0 1 0 1 0 0 0 0 0 1 0 0 0 0 0
1 0 1 1 0 1 1 1 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0
0 1 0 1 1 0 1 1 1 1 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 1 0 1 1 0 1 1 1 1 0 0 0 0 0 0 0 0 0 1 0 0
0 0 0 1 0 1 1 0 1 1 1 1 0 0 0 0 0 0 0 0 0 1 0
1 1 1 1 0 0 1 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 1
THE GOLAY (23,12) CODE

• The decoder uses two tables: one of size 2^12 and the other of
  size 2^11.
• The encoding table (size 2^12) is addressed by R1…R12; the syndrome
  S = S1…S11 addresses the decoding table (size 2^11); the error
  pattern E is added to R = R1…R23 to give C.
TABLE LOOK UP DECODER: GOLAY (24,12) CODE

• For the Golay (24,12) code, both tables have size 2^12: the encoding
  table is addressed by R1…R12, and the decoding table by the syndrome
  S = S1…S12. The error pattern E is added to R = R1…R24 to give C.
SHORTENING BINARY PARITY
CHECK CODES
• For any positive integer “a” (a< k), if one starts with
an (n,k) binary parity check code with minimum
distance dmin, one can construct an (n-a, k-a) parity
check code with minimum distance at least dmin.
• When the code is used for error detection, the only time that the
  detector will fail is if the error pattern itself is a code word.
  Then, the syndrome will be all-zero but errors will have occurred.
ECE 154 C
Spring 2010
BINARY CONVOLUTIONAL CODES

• A binary convolutional code is a set of infinite length binary
  sequences which satisfy a certain set of conditions. In particular,
  the sum of two code words is a code word.
• The encoder is a shift register with two mod-2 adders (⊕) forming the
  output binary sequence.
• There are two output binary digits for each (one) input binary digit.
  This gives a rate ½ code.
• Example: with input 1 and current state 1 0, the encoder output is
  0 1 and the next state is 1 1.
• The encoder state is the pair of digits (s1, s0) stored in the shift
  register. The four states are labeled:

  State   s1 s0
    a      0  0
    b      1  0
    c      0  1
    d      1  1
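The state machine above can be sketched as a small encoder. The tap connections are assumed from the trellis branch labels (the standard octal (7, 5) generators: first output u ⊕ s1 ⊕ s0, second output u ⊕ s0, next state (u, s1)); this is a sketch, not necessarily the slides' exact circuit:

```python
# Rate 1/2, 4-state convolutional encoder sketch (assumed (7,5) taps).
def conv_encode(bits):
    s1 = s0 = 0                       # shift register starts in state a = 00
    out = []
    for u in bits:
        out += [u ^ s1 ^ s0, u ^ s0]  # two output digits per input digit
        s1, s0 = u, s1                # shift: next state is (u, s1)
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```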
NOTION OF STATES

• These states can be put as labels of the nodes on the tree: each node
  of the code tree is labeled with its state (a, b, c, or d), and each
  branch with the corresponding pair of output digits.
NOTION OF A TRELLIS
• There are only 4 states. Thus, the states at
the same depth in the tree can be merged,
and the tree can be redrawn as a trellis:
• Branch labels (output digit pairs) on the trellis: a→a: 00, a→b: 11,
  b→c: 10, b→d: 01, c→a: 11, c→b: 00, d→c: 01, d→d: 10.
OTHER EXAMPLES

• An 8-state, rate ½ code. The tap connections of the two mod-2 adders
  can be represented as:

  Top    – 1 0 1 1
  Bottom – 1 1 0 1.

  This is written in octal as (13, 15).

• The corresponding 8-state trellis has states 000, 100, 010, 110,
  001, 101, 011, 111.
OTHER EXAMPLES

• An 8-state, rate 2/3 encoder with 2 inputs and 3 outputs: the two
  input streams feed shift registers and mod-2 adders that form 3
  output digits for every 2 input digits.
HARD DECISION DECODING ON THE RATE ½, 4-STATE TRELLIS

• Assume that we receive 0 1 1 1 1 0 0 0 . . . We put the number of
  differences or "errors" between each received pair and the branch
  label on each branch.
VITERBI DECODING

• We count the total number of errors on each possible path and choose
  the path with the fewest errors. We do this one step at a time,
  adding the branch errors along each path and recording the running
  totals at each state.
VITERBI DECODING

• Viterbi decoding tells us to choose the smallest number at each state
  and eliminate the other path.
VITERBI DECODING

• Continuing in this fashion, at each state we again keep the path with
  the smaller total.
• In case of a tie, take either path.
VITERBI DECODING

• This yields one surviving path into each of the four states, each
  with its accumulated error count.
VITERBI DECODING

• If a decision is to be made at this point we choose the path with the
  smallest sum. The information sequence (i.e., input to the encoder)
  0 1 0 1 corresponds to that path.
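The whole procedure can be sketched in a few lines. This assumes the same 4-state (7,5) encoder implied by the trellis labels (states are (s1, s0), next state (u, s1)); it is a sketch, not the slides' worked figure:

```python
# Hard-decision Viterbi decoding on the 4-state, rate 1/2 trellis.
def viterbi(received):
    """received: list of (bit, bit) pairs; returns the decoded inputs."""
    states = [(0, 0), (1, 0), (0, 1), (1, 1)]        # a, b, c, d
    cost = {(0, 0): 0, (1, 0): float('inf'),
            (0, 1): float('inf'), (1, 1): float('inf')}  # start in state a
    path = {s: [] for s in states}
    for r0, r1 in received:
        new_cost, new_path = {}, {}
        for s1, s0 in states:
            for u in (0, 1):
                o0, o1 = u ^ s1 ^ s0, u ^ s0          # branch output
                nxt = (u, s1)                          # next state
                c = cost[(s1, s0)] + (o0 != r0) + (o1 != r1)
                if nxt not in new_cost or c < new_cost[nxt]:
                    new_cost[nxt] = c                  # keep the survivor
                    new_path[nxt] = path[(s1, s0)] + [u]
        cost, path = new_cost, new_path
    best = min(cost, key=cost.get)                     # smallest total errors
    return path[best]

# The received sequence 01 11 10 00 from the slides:
print(viterbi([(0, 1), (1, 1), (1, 0), (0, 0)]))  # [0, 1, 0, 1]
```

This reproduces the slides' answer: the best path has 1 accumulated error and corresponds to the information sequence 0 1 0 1.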
VITERBI DECODING

• In a packet communication system, one could append 0’s at the end of
  the message (i.e., the input to the encoder) to force the encoder
  back to the all 0 state.
SOFT INPUT DECODING

• For soft input decoding, the 0’s and 1’s on the trellis branches are
  replaced by +1’s and -1’s, and the branch metrics are real numbers
  measuring how close the received values are to the branch labels.
  The Viterbi algorithm then proceeds exactly as before, accumulating
  these real-valued metrics along each surviving path.
DECODING ON THE TRELLIS: SOFT OUTPUT DECODING

• The Viterbi algorithm finds the path on the trellis that was most
  likely to have been transmitted. It makes hard decisions.
PUNCTURING CONVOLUTIONAL CODES

• Starting with a rate ½ encoder one can construct higher rate codes by
  a technique called puncturing: some of the output digits are deleted
  before transmission (e.g., the branch label 11 becomes 1X when its
  second digit is punctured).
• The X’s denote the missing bits and are ignored in decoding.
• Note that the same trellis is used in decoding the original code and
  the punctured code.
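Puncturing itself is a simple deletion pattern. A sketch, assuming the pattern implied by the trellis labels (every 4th coded bit deleted, so 2 information bits produce 3 transmitted bits, i.e. rate 2/3):

```python
# Puncture a rate 1/2 coded stream to rate 2/3: in every block of 4
# coded bits, delete the bit marked X (assumed pattern: keep, keep, keep, drop).
def puncture(coded, pattern=(1, 1, 1, 0)):
    reps = -(-len(coded) // len(pattern))            # repeat pattern to cover input
    return [b for b, keep in zip(coded, pattern * reps) if keep]

print(puncture([1, 1, 1, 0, 1, 1, 0, 1]))  # [1, 1, 1, 1, 1, 0]
```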
PARALLEL TURBO ENCODER

• Example of a rate 1/3 Turbo encoder: the input feeds one
  convolutional encoder directly and a second identical encoder through
  a random interleaver, giving 3 outputs per input digit.

PARALLEL TURBO DECODER

• The corresponding decoder uses a random deinterleaver (the inverse of
  the encoder’s interleaver).
Introduction to LDPC Codes
• These codes were invented by Gallager in his Ph.D.
dissertation at M.I.T. in 1960.
  c1 ⊕ c2 ⊕ c3 ⊕ c5 = 0
  c1 ⊕ c2 ⊕ c4 ⊕ c6 = 0
  c1 ⊕ c3 ⊕ c4 ⊕ c7 = 0

  1 1 1 0 1 0 0
  1 1 0 1 0 1 0
  1 0 1 1 0 0 1
  0 0 1 0 0 1 1 1 0 0 0 0     c3 ⊕ c6 ⊕ c7 ⊕ c8 = 0
  1 1 0 0 1 0 0 0 0 0 0 1     c1 ⊕ c2 ⊕ c5 ⊕ c12 = 0
  0 0 0 1 0 0 0 0 1 1 1 0     c4 ⊕ c9 ⊕ c10 ⊕ c11 = 0
  0 1 0 0 0 1 1 0 0 1 0 0     c2 ⊕ c6 ⊕ c7 ⊕ c10 = 0
  1 0 1 0 0 0 0 1 0 0 1 0     c1 ⊕ c3 ⊕ c8 ⊕ c11 = 0
  0 0 0 1 1 0 0 0 1 0 0 1     c4 ⊕ c5 ⊕ c9 ⊕ c12 = 0
  1 0 0 1 1 0 1 0 0 0 0 0     c1 ⊕ c4 ⊕ c5 ⊕ c7 = 0
  0 0 0 0 0 1 0 1 0 0 1 1     c6 ⊕ c8 ⊕ c11 ⊕ c12 = 0
  0 1 1 0 0 0 0 0 1 1 0 0     c2 ⊕ c3 ⊕ c9 ⊕ c10 = 0
The Parity Check Matrix for the
Simple LDPC Code
0 0 1 0 0 1 1 1 0 0 0 0
1 1 0 0 1 0 0 0 0 0 0 1
0 0 0 1 0 0 0 0 1 1 1 0
0 1 0 0 0 1 1 0 0 1 0 0
1 0 1 0 0 0 0 1 0 0 1 0
0 0 0 1 1 0 0 0 1 0 0 1
1 0 0 1 1 0 1 0 0 0 0 0
0 0 0 0 0 1 0 1 0 0 1 1
0 1 1 0 0 0 0 0 1 1 0 0
Only the lines corresponding to the 1st row and 1st column are shown.
Entire Graph for the Simple
LDPC Code
• Note that each bit node has 3 lines connecting it to
parity nodes and each parity node has 4 lines
connecting it to bit nodes.
Decoding of LDPC Codes by
Message Passing on the Graph
• Decoding is accomplished by passing messages
along the lines of the graph.
• This parity node has the estimates p3, p6, p7, and p8
corresponding to the bit nodes c3, c6, c7, and c8, where pi is an
estimate for Pr[ci=1].
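The message a parity node sends back to a bit node, say to c3 given the estimates p6, p7, p8 for the other bits on that check, can be computed with Gallager's rule for the parity of independent bits (this formula is assumed here, not spelled out on the slides): Pr[odd number of ones] = (1 − Π(1 − 2p_j)) / 2.

```python
# Check-node (parity-node) message: the probability that the bit must be
# 1 so that the whole check has even parity, given estimates for the
# other bits on the check.
def check_to_bit(others):
    prod = 1.0
    for p in others:
        prod *= (1.0 - 2.0 * p)        # E[(-1)^bit] for each other bit
    return (1.0 - prod) / 2.0

print(check_to_bit([0.5, 0.5, 0.5]))   # 0.5  (no information)
print(check_to_bit([0.0, 0.0, 0.0]))   # 0.0  (others surely 0, so bit is 0)
```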
Message Passing for First 4 Bit Nodes for More Iterations

• The chart plots Prob[C=1] for bit nodes C1–C4 over successive up and
  down message-passing steps:

       Up    Down  Down  Down  Up    Up    Up    Down  Down  Down  Up    Up    Up
  C1  0.900 0.500 0.436 0.372 0.805 0.842 0.874 0.594 0.640 0.656 0.968 0.962 0.959
  C2  0.500 0.756 0.756 0.436 0.705 0.705 0.906 0.640 0.690 0.630 0.791 0.751 0.798
  C3  0.400 0.756 0.756 0.500 0.674 0.674 0.865 0.790 0.776 0.644 0.807 0.820 0.897
  C4  0.300 0.756 0.756 0.756 0.804 0.804 0.804 0.749 0.718 0.692 0.710 0.742 0.765
Messages Passed To and From All 12 Bit Nodes

       Up    Down  Down  Down  Up    Up    Up    Down  Down  Down  Up    Up    Up
  C1  0.900 0.500 0.436 0.372 0.805 0.842 0.874 0.594 0.640 0.656 0.968 0.962 0.959
  C2  0.500 0.756 0.756 0.436 0.705 0.705 0.906 0.640 0.690 0.630 0.791 0.751 0.798
  C3  0.400 0.756 0.756 0.500 0.674 0.674 0.865 0.790 0.776 0.644 0.807 0.820 0.897
  C4  0.300 0.756 0.756 0.756 0.804 0.804 0.804 0.749 0.718 0.692 0.710 0.742 0.765
  C5  0.900 0.500 0.372 0.372 0.759 0.842 0.842 0.611 0.694 0.671 0.976 0.966 0.970
  C6  0.900 0.436 0.500 0.756 0.965 0.956 0.874 0.608 0.586 0.643 0.958 0.962 0.952
  C7  0.900 0.436 0.500 0.372 0.842 0.805 0.874 0.647 0.628 0.656 0.967 0.969 0.965
  C8  0.900 0.436 0.436 0.756 0.956 0.956 0.843 0.611 0.605 0.656 0.963 0.964 0.956
  C9  0.900 0.372 0.372 0.500 0.842 0.842 0.759 0.722 0.694 0.703 0.980 0.982 0.981
  C10 0.900 0.372 0.500 0.500 0.900 0.842 0.842 0.690 0.614 0.654 0.964 0.974 0.970
  C11 0.900 0.372 0.436 0.756 0.956 0.943 0.805 0.667 0.608 0.676 0.967 0.974 0.965
  C12 0.900 0.500 0.372 0.756 0.943 0.965 0.842 0.565 0.642 0.657 0.969 0.957 0.955
Messages Passed To and From All 12 Bit Nodes

• The chart plots Prob[C=1] for all 12 bit nodes over the same sequence
  of up and down message-passing steps.
More Iterations

• The chart continues the plot of Prob[C=1] for all 12 bit nodes over
  further up and down message-passing steps.
More Interesting Example

• A second chart plots Prob[C=1] for all 12 bit nodes over many up and
  down message-passing steps.
Computation at Bit Nodes

• If estimates of probabilities are statistically independent, you
  should multiply them:

  p’ = K pa pb pc.

• Setting (1 - K pa pb pc) equal to K (1-pa)(1-pb)(1-pc) and solving
  for K we obtain:

  K = 1 / [ pa pb pc + (1-pa)(1-pb)(1-pc) ].
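The normalization above amounts to computing both hypotheses and dividing; a minimal sketch of the bit-node update:

```python
# Bit-node update: multiply the independent estimates for c = 1 and for
# c = 0, then normalize so the two probabilities sum to one (the K above).
def bit_update(ps):
    p1 = 1.0
    p0 = 1.0
    for p in ps:
        p1 *= p              # product of Pr[c = 1] estimates
        p0 *= (1.0 - p)      # product of Pr[c = 0] estimates
    return p1 / (p0 + p1)    # K * pa * pb * pc

print(round(bit_update([0.9, 0.9, 0.9]), 4))  # 0.9986
```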
• For a regular LDPC code with column weight J and row weight K,
  counting the 1’s in H by columns and by rows gives

  J (# of columns) = K (# of rows), or J n = K (n-k).

• Solving for k / n, we have k/n = (1 - J / K), the rate of the code.
• The optimal distributions for the rows and the columns are found by a
  technique called density evolution.
• Left degrees
  l3 = .44506    l5 = .26704
  l9 = .14835    l17 = .07854
  l33 = .04046   l65 = .02055
• Right degrees
  r7 = .38282    r8 = .29548
  r19 = .10225   r20 = .18321
  r84 = .04179   r85 = .02445
From MacKay’s Website
• The figure shows the performance of various codes with rate 1/4
over the Gaussian Channel. From left to right:
• Irregular low density parity check code over GF(8), blocklength
48000 bits (Davey and MacKay, 1999);
• JPL turbo code (JPL, 1996) blocklength 65536;
• Regular LDPC over GF(16), blocklength 24448 bits (Davey and
MacKay, 1998);
• Irregular binary LDPC, blocklength 16000 bits (Davey, 1999);
• M.G. Luby, M. Mitzenmacher, M.A. Shokrollahi and D.A. Spielman's
(1998) irregular binary LDPC, blocklength 64000 bits;
• JPL's code for Galileo: a concatenated code based on constraint
length 15, rate 1/4 convolutional code (in 1992, this was the best
known code of rate 1/4); blocklength about 64,000 bits;
• Regular binary LDPC: blocklength 40000 bits (MacKay, 1999).
Conclusions
• The inherent parallelism in decoding LDPC codes suggests
their use in high data rate systems.