LDPC Coding

Low-Density Parity-Check Codes

 Low-Density Parity-Check (LDPC) codes are a class of linear block codes characterized by sparse parity-check matrices H
 H has a low density of 1's
 LDPC codes were originally invented by Robert Gallager in the early 1960s but were largely ignored until they were "rediscovered" in the mid-1990s by MacKay
 Sparseness of H can yield a large minimum distance dmin and reduces decoding complexity
 Can perform within 0.0045 dB of the Shannon limit

Regular vs. Irregular LDPC codes
 An LDPC code is regular if the rows and columns of H have uniform weight, i.e. all rows have the same number of ones and all columns have the same number of ones
 Although regular codes have low structural complexity, their performance is poor compared to irregular codes
 An LDPC code is irregular if the rows and columns have non-uniform weight
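The distinction can be checked mechanically. A minimal sketch (the helper name is my own, not from the slides):

```python
# Check whether a parity-check matrix H (list of 0/1 rows) is regular:
# all rows share one weight and all columns share another.
def is_regular(H):
    row_weights = {sum(row) for row in H}
    col_weights = {sum(col) for col in zip(*H)}
    return len(row_weights) == 1 and len(col_weights) == 1

# Every row has weight 4 and every column weight 2: regular.
H_reg = [[1, 1, 1, 1, 0, 0],
         [0, 0, 1, 1, 1, 1],
         [1, 1, 0, 0, 1, 1]]
```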

6/7/2006 Turbo and LDPC Codes 3/133


LDPC Encoding
 Generate the H matrix
• Semi-random (Gallager's method)
• Fully random
• Structured (quasi-cyclic method)
 Convert H into systematic form by Gaussian elimination to get the G matrix
 Multiply the message word u by G to get the codeword c = uG
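A minimal sketch of this encoding path, assuming the last m columns of H can be pivoted to the identity (a full encoder would permute columns when a pivot is missing); the function names are mine, not from the slides:

```python
# Bring H into systematic form [A | I_m] by GF(2) Gaussian elimination,
# derive G = [I_k | A^T], and encode c = u G with all arithmetic mod 2.
def systematic_form(H):
    H = [row[:] for row in H]
    m, n = len(H), len(H[0])
    k = n - m
    for i in range(m):
        pivot_col = k + i
        # Assumes a pivot exists in this column (hypothetical simplification).
        r = next(r for r in range(i, m) if H[r][pivot_col] == 1)
        H[i], H[r] = H[r], H[i]
        for other in range(m):
            if other != i and H[other][pivot_col] == 1:
                H[other] = [a ^ b for a, b in zip(H[other], H[i])]
    return H

def generator_from(H_sys):
    m, n = len(H_sys), len(H_sys[0])
    k = n - m
    A = [row[:k] for row in H_sys]        # H_sys = [A | I_m]
    # G = [I_k | A^T] satisfies G H^T = 0 over GF(2).
    return [[1 if j == i else 0 for j in range(k)]
            + [A[r][i] for r in range(m)] for i in range(k)]

def encode(u, G):
    # c = u G (mod 2), computed column by column.
    return [sum(ui & g for ui, g in zip(u, col)) % 2 for col in zip(*G)]
```

For example, with H = [[1,0,1,1,0],[1,1,0,0,1]] and u = [1,0,1] this yields c = [1,0,1,0,1], which satisfies both parity checks of H.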

Why QC for High Speed Systems?
 Quasi-cyclic structure:

        | A1,1  A1,2  ...  A1,n |
    H = | A2,1  A2,2  ...  A2,n |
        |  ...   ...  ...   ... |
        | Am,1  Am,2  ...  Am,n |

 Ai,j is a circulant matrix: each row is the right cyclic shift of the row above it, and the first row is the right cyclic shift of the last row
 The advantages of QC structure
 Allow linear-time encoding using shift registers
 Allow partially parallel decoding
 Save memory
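A circulant block as described above is fully determined by its first row, which is what makes shift-register encoding possible; a small sketch (function name is mine):

```python
# Build an L x L circulant from its first row: each subsequent row is
# the right cyclic shift of the row above, as for the A_{i,j} blocks.
def circulant(first_row):
    L = len(first_row)
    rows = [first_row]
    for _ in range(L - 1):
        prev = rows[-1]
        rows.append([prev[-1]] + prev[:-1])   # right cyclic shift by one
    return rows
```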

QC LDPC Prime Field Construction
 For a prime q, the integers {0, 1, ..., q−1} form the Galois field GF(q)
 Select a and b from GF(q) with multiplicative orders o(a) = k and o(b) = j
 Form a j x k matrix P of elements from GF(q) whose (s, t)th element is Ps,t = b^s a^t

 Corresponding H is made up of an array of circulant
submatrices

 E.g. let a = 2, b = 6 from GF(7), so o(a) = 3 and o(b) = 2

 I1, I2 etc. are 7 x 7 identity matrices whose rows are cyclically shifted to the left by one and two positions respectively

 P = | 1 2 4 |
     | 6 5 3 |     (Ps,t = 6^s · 2^t mod 7)
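The construction is easy to verify numerically (the function name is mine, not from the slides):

```python
# P[s][t] = b^s * a^t mod q, a j x k matrix over GF(q).
def prime_field_P(q, a, b, j, k):
    return [[(pow(b, s, q) * pow(a, t, q)) % q for t in range(k)]
            for s in range(j)]

# Slide example: q = 7, a = 2 (order 3), b = 6 (order 2), so j = 2, k = 3.
P = prime_field_P(7, 2, 6, 2, 3)
# P == [[1, 2, 4], [6, 5, 3]]
```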

Decoding LDPC codes
 Like Turbo codes, LDPC can be decoded iteratively
– Instead of a trellis, the decoding takes place on a Tanner graph
– Messages are exchanged between the v-nodes and c-nodes
– Edges of the graph act as information pathways
 Hard decision decoding
– Bit-flipping algorithm
 Soft decision decoding
– Sum-product algorithm
• Also known as the message-passing or belief-propagation algorithm
 In general, the per-iteration complexity of LDPC codes is less than that of turbo codes
– However, many more iterations may be required (max ≈ 100, avg ≈ 30)
– Thus, overall complexity can be higher than turbo
Tanner Graphs
 A Tanner graph is a bipartite graph that describes the parity check
matrix H
 There are two classes of nodes:
 Variable-nodes: Correspond to bits of the codeword or equivalently, to
columns of the parity check matrix
 There are n v-nodes
 Check-nodes: Correspond to parity check equations or equivalently, to
rows of the parity check matrix
 There are m = n − k c-nodes
 Bipartite means that nodes of the same type cannot be connected (e.g. a c-
node cannot be connected to another c-node)
 The ith check node is connected to the jth variable node iff the (i,j)th
element of the parity check matrix is one, i.e. if hij =1
 All of the v-nodes connected to a particular c-node must sum (modulo-2)
to zero
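These rules translate directly into code; a sketch (names are mine):

```python
# Edge (i, j) of the Tanner graph exists iff h_ij = 1.
def tanner_edges(H):
    return [(i, j) for i, row in enumerate(H) for j, h in enumerate(row) if h]

# A word c is a codeword iff the bits attached to every check node
# sum to zero mod 2.
def satisfies_checks(H, c):
    return all(sum(b for b, h in zip(c, row) if h) % 2 == 0 for row in H)
```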
More on Tanner Graphs
 A cycle of length l in a Tanner graph is a path of l distinct edges which
closes on itself
 The girth of a Tanner graph is the minimum cycle length of the
graph.
 The shortest possible cycle in a Tanner graph has length 4
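A length-4 cycle exists exactly when two check nodes share two or more variable nodes, i.e. when two rows of H have 1's in two common columns; a quick test (function name is mine):

```python
# True iff the Tanner graph of H contains a cycle of length 4:
# some pair of rows of H overlaps in at least two columns.
def has_four_cycle(H):
    m = len(H)
    for i in range(m):
        for r in range(i + 1, m):
            if sum(a & b for a, b in zip(H[i], H[r])) >= 2:
                return True
    return False
```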

c-nodes
f0 f1 f2

c0 c1 c2 c3 c4 c5 c6

v-nodes
Bi-partite Graph Structure
Belief Propagation- Hard decoding
 Example: transmitted codeword c = [1 0 0 1 0 1 0 1]; received word y = [1 1 0 1 0 1 0 1] (bit y1 is in error)
1. In the first step all v-nodes ci send a "message" to their c-nodes fj containing the bit they believe to be the correct one for them. At this stage the only information a v-node ci has is the corresponding received bit yi. That means, for example, that c0 sends a message containing y0 (= 1) to f1 and f3, node c1 sends messages containing y1 (= 1) to f0 and f1, and so on.
2. In the second step every check node fj calculates a response to every connected variable node. The response message contains the bit that fj believes to be the correct one for this v-node ci, assuming that the other v-nodes connected to fj are correct.

3. Next phase: the v-nodes receive the messages from the check nodes and use this additional information to decide whether their originally received bit is correct. A simple way to do this is a majority vote.

4. Go to step 2
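Steps 1–4 can be sketched as a hard-decision bit-flipping decoder. The parity-check matrix below is an assumption (it is not given in the text): it is chosen to be consistent with the connections described in step 1 (c0 attached to f1 and f3, c1 to f0 and f1) and with the example codeword.

```python
# Hard-decision bit flipping: repeatedly flip the bit that participates
# in the largest number of failed parity checks.
def bit_flip_decode(H, y, max_iters=100):
    c = y[:]
    n = len(c)
    for _ in range(max_iters):
        failed = [row for row in H
                  if sum(b for b, h in zip(c, row) if h) % 2 == 1]
        if not failed:
            return c                       # every check satisfied
        votes = [sum(row[j] for row in failed) for j in range(n)]
        c[votes.index(max(votes))] ^= 1    # majority-vote flip
    return c

# Assumed H consistent with the slide's example.
H = [[0, 1, 0, 1, 1, 0, 0, 1],
     [1, 1, 1, 0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0, 1, 1, 1],
     [1, 0, 0, 1, 1, 0, 1, 0]]
```

Starting from y = [1 1 0 1 0 1 0 1], only bit 1 sits in both failed checks, so the first flip already restores c = [1 0 0 1 0 1 0 1].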
Belief Propagation- Soft decoding
 Pi = Pr(ci = 1 | yi)
 qij is a message sent by the variable node ci to the check node fj. Every message always contains the pair qij(0) and qij(1), which stands for the amount of belief that yi is a "0" or a "1".
 rji is a message sent by the check node fj to the variable node ci. Again there is an rji(0) and rji(1) that indicate the (current) amount of belief that yi is a "0" or a "1".
1. All variable nodes send their qij messages. Since no other information is available at this step, qij(1) = Pi and qij(0) = 1 − Pi
2. The check nodes calculate their response messages rji:

   rji(0) = 1/2 + 1/2 · Π_{i'∈Vj\i} (1 − 2 qi'j(1)),   rji(1) = 1 − rji(0)

So they calculate the probability that there is an even number of 1's among the variable nodes except ci (that is exactly what Vj\i means). This probability is equal to rji(0).
3. The variable nodes update their response messages to the check nodes according to

   qij(0) = Kij (1 − Pi) Π_{j'∈Ci\j} rj'i(0)
   qij(1) = Kij Pi Π_{j'∈Ci\j} rj'i(1)

where the constants Kij are chosen to ensure that qij(0) + qij(1) = 1. Ci\j means all check nodes connected to ci except fj
Again the check nodes update their beliefs, and so on.

When the algorithm converges, the final beliefs at the variable nodes are computed as

   Qi(0) = Ki (1 − Pi) Π_{j∈Ci} rji(0),   Qi(1) = Ki Pi Π_{j∈Ci} rji(1)

The final decision is then ĉi = 1 if Qi(1) > Qi(0), and ĉi = 0 otherwise. If the current estimated codeword fulfills the parity-check equations the algorithm terminates. Otherwise termination is ensured through a maximum number of iterations.
4. Go to step 2
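The whole soft-decision loop can be sketched as a probability-domain sum-product decoder, assuming a memoryless channel that supplies Pi = Pr(ci = 1 | yi); all names below are mine, not from the slides:

```python
# Probability-domain sum-product decoding. p[j] = Pr(c_j = 1 | y_j).
def sum_product_decode(H, p, max_iters=50):
    m, n = len(H), len(H[0])
    edges = [(i, j) for i in range(m) for j in range(n) if H[i][j]]
    q1 = {e: p[e[1]] for e in edges}           # q_ij(1), initialised to P_i
    c = [1 if pj > 0.5 else 0 for pj in p]
    for _ in range(max_iters):
        # Check-node update: r_ji(0) = 1/2 + 1/2 * prod_{i' in Vj\i} (1 - 2 q_i'j(1))
        r0 = {}
        for (i, j) in edges:
            prod = 1.0
            for (i2, j2) in edges:
                if i2 == i and j2 != j:
                    prod *= 1.0 - 2.0 * q1[(i2, j2)]
            r0[(i, j)] = 0.5 + 0.5 * prod
        # Variable-node update with normalisation (the K_ij constants).
        for (i, j) in edges:
            a0, a1 = 1.0 - p[j], p[j]
            for (i2, j2) in edges:
                if j2 == j and i2 != i:
                    a0 *= r0[(i2, j2)]
                    a1 *= 1.0 - r0[(i2, j2)]
            q1[(i, j)] = a1 / (a0 + a1)
        # Tentative hard decision from the total beliefs Q_i.
        for j in range(n):
            b0, b1 = 1.0 - p[j], p[j]
            for (i2, j2) in edges:
                if j2 == j:
                    b0 *= r0[(i2, j2)]
                    b1 *= 1.0 - r0[(i2, j2)]
            c[j] = 1 if b1 > b0 else 0
        if all(sum(c[j] for j in range(n) if row[j]) % 2 == 0 for row in H):
            break                              # valid codeword: terminate
    return c
```

On the hard-decoding example (one flipped bit, modelled as a BSC with crossover 0.2), this recovers the transmitted codeword within two iterations.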
LDPC vs. Turbo Codes
 Probabilistic error floors in Turbo codes – avoided through proper design of the interleaver
 Deterministic error floors in LDPC codes – avoided through a proper degree distribution obtained through density evolution
 In low-noise channel conditions LDPC converges faster; in noisy channels Turbo needs fewer iterations
 At lower code rates Turbo codes perform better
 Digital Video Broadcasting (DVB) – internationally accepted open standards for digital television by the European Telecommunications Standards Institute
 The latest version of the standard, DVB-S2, uses a concatenation of an outer BCH code and an inner LDPC code
 DVB-RCS (Return Channel via Satellite), which is an interactive on-demand multimedia satellite communication system, uses duo-binary Turbo codes over GF(4)
Thank You
