
ETN642 Information Theory and Coding

(Spring 2024)

Lecture 11

Prof Dr Shurjeel Wyne

CH 6 Linear Block Codes


Book: Digital Communications: Fundamentals and Applications, by Bernard Sklar, 2nd ed.

Outline
• Introduction to Channel coding for error correction
• Linear block codes
• Error detection and correction capability
• Channel Encoding
• Channel Decoding
• Sub-types
• Hamming codes
• Cyclic codes

What is Channel Coding (FEC)
 Channel coding is employed to reduce a communication link’s error rate
without retransmissions; it is also called forward error correction (FEC)

 In channel coding, structured redundancy is introduced into the information bits
before transmission to protect against bit errors caused by channel impairments such as noise

 The additional redundant codeword symbols increase the raw symbol rate over the
channel  the bandwidth requirement increases

 Error-rate versus Bandwidth efficiency trade-off becomes possible with FEC

 Three types of FEC codes:


 Block Codes
 Convolutional Codes
 Turbo Codes
Note: In ETN642, focus only on forward error correcting codes
(also possible to design channel codes to only detect errors & not correct them)

Block diagram of a DCS

(Figure: Transmit chain: Format → Channel encoder → Pulse modulate → Bandpass modulate (digital modulation) → Channel; receive chain: Demodulate/Sample → Detect (digital demodulation) → Channel decoder → Format.)
Performance Trade-offs possible with FEC
• Error rate vs. bandwidth (A->B vs A->C). Example case: the desired PB reduction is
obtained while the SNR is maintained

• Power vs. bandwidth (D->F vs D->E). Example case: the SNR decreases due to a
reduction in TX power, but PB is maintained

• Data rate vs. bandwidth (D->F vs D->E). Example case: the SNR decreases due to an
increase in bit rate at constant TX power, but PB is maintained

Coding gain:
The coding gain of a channel code is defined as the reduction in Eb/N0 (relative to
the uncoded case) achieved by using this code:

G [dB] = (Eb/N0) [dB], uncoded − (Eb/N0) [dB], coded

By convention, the coding gain is always defined for some target PB achievable in
the high-SNR regime.

(Figure: PB vs. Eb/N0 [dB] curves for the coded and uncoded cases, with operating points A–F.)

NOTE: Under the constraint that Rb is maintained after channel coding, a faster symbol rate
is needed  more bandwidth is required relative to the uncoded case

Linear block codes

• At TX, the information bit stream is divided into blocks of k bits
• Each block of k information bits is encoded into a larger block of n code symbols
• Codeword symbols are modulated and sent over the communication channel
• At RX, codewords are decoded to extract the corresponding information bits

Data block (k information bits) → Channel Encoder → n-bit codeword
n = (n−k) redundant bits + k information bits
code rate: R = k/n

• The error correction capability of a channel code depends on its distance properties
• The distance between two codewords is the number of symbols in which
they differ
• The minimum distance of a code is the smallest distance among the set of distances
between any pair of codewords of this code

Some Binary Arithmetic
• The binary field {0,1}, also called a Galois field of order 2
and denoted GF(2), is closed under modulo-2
addition and multiplication

Addition        Multiplication
0 ⊕ 0 = 0       0 ⋅ 0 = 0
0 ⊕ 1 = 1       0 ⋅ 1 = 0
1 ⊕ 0 = 1       1 ⋅ 0 = 0
1 ⊕ 1 = 0       1 ⋅ 1 = 1
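The two GF(2) operations reduce to familiar bit operations: modulo-2 addition is XOR and modulo-2 multiplication is AND. A minimal sketch (not from the lecture; function names are illustrative):

```python
# GF(2) arithmetic: addition is XOR, multiplication is AND (both modulo 2).

def gf2_add(a, b):
    """Modulo-2 addition over GF(2); equivalent to a ^ b for bits."""
    return (a + b) % 2

def gf2_mul(a, b):
    """Modulo-2 multiplication over GF(2); equivalent to a & b for bits."""
    return (a * b) % 2

# Reproduce the addition and multiplication tables above.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} = {gf2_add(a, b)}   {a} * {b} = {gf2_mul(a, b)}")
```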

Vector Space
Let V be a set of vectors and F a field of elements
called scalars.

Then V forms a vector space over F if:

1. V is closed under

i. Vector addition: ∀u, v ∈ V  u + v = v + u ∈ V

ii. Scalar multiplication: ∀a ∈ F, ∀v ∈ V  a ⋅ v ∈ V

2. V contains the all-zero vector:

∀u ∈ V  u + 0 = 0 + u = u

Vector Space…cont’d
• Spanning set:
• A set of vectors G = {v1, v2, …, vn} is said to span V, or to be a
spanning set for vector space V, if all vectors in the vector space V
can be constructed as linear combinations of vectors in G

Example: The 2^4 binary 4-tuples form the vector space V4

{(1000), (0110), (1100), (0011), (1001)} spans V4 .


• Basis:
• A spanning set of V that has minimal cardinality is called a basis for V
(Cardinality of a set is the number of elements in the set.)

Example:
{(1000), (0100), (0010), (0001)} is a basis for V4 .
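The spanning-set example above can be checked by brute force: XOR every GF(2) linear combination of the five vectors and see whether all 16 binary 4-tuples appear. A small sketch (not from the lecture; variable names are illustrative):

```python
from itertools import product

# The five vectors of the spanning-set example, as 4-bit integers.
span_set = [0b1000, 0b0110, 0b1100, 0b0011, 0b1001]

# Every GF(2) linear combination is an XOR of a subset of the vectors.
generated = set()
for coeffs in product((0, 1), repeat=len(span_set)):
    v = 0
    for c, vec in zip(coeffs, span_set):
        if c:
            v ^= vec
    generated.add(v)

print(len(generated))  # 16: all 2^4 binary 4-tuples, so the set spans V4
```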

Linear Block Code as Vector Space

Linear block code (n,k)
Let C be a set of 2^k unique n-tuples; then C is a linear block code if and only
if it is a subspace of the vector space Vn

Vk → C ⊂ Vn

• Members of C are called codewords
• Any linear combination of codewords is also a codeword
• The all-zero word is a codeword

(Figure: the mapping (encoding) takes the message space Vk onto the codeword set C, a subspace of Vn; the basis of C defines the mapping.)

Linear block code as Vector Space…cont’d

• The 2^n binary n-tuples form the vector space Vn

• We want the codewords to be as far apart from one another
as possible, so that even if the codewords are corrupted by
noise during transmission, they may still be decoded correctly
with high probability

• But we also strive for high coding efficiency (large k/n) by
packing the vector space Vn with as many codewords as possible

Hamming Weight and Hamming Distance


• The Hamming weight of a vector U, denoted by w(U), is the
number of non-zero elements in U.

• The Hamming distance between two vectors (or codewords)


U and V, is the number of elements in which they differ.
d (U, V ) = w(U ⊕ V )

• The minimum (Hamming) distance dmin of a block code C:

d_min = min_{i ≠ j} d(Ui, Uj) = min_{Ui ≠ 0} w(Ui),   Ui, Uj ∈ C
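Both definitions are easy to compute directly; a small sketch using two codewords of the (6,3) example code from this lecture (function names are illustrative):

```python
def hamming_weight(u):
    """w(U): the number of non-zero elements in U."""
    return sum(1 for x in u if x != 0)

def hamming_distance(u, v):
    """d(U, V) = w(U XOR V): positions in which U and V differ."""
    return sum(1 for a, b in zip(u, v) if a != b)

U = (1, 1, 0, 1, 0, 0)   # codeword 110100
V = (0, 1, 1, 0, 1, 0)   # codeword 011010
print(hamming_weight(U))       # 3
print(hamming_distance(U, V))  # 4
```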

Block Code: Error Detection and Correction Capability

• For a code C with minimum distance dmin, the error-detection capability e
is given by

e = dmin − 1

In a received n-bit block, up to e erroneous bits can be detected

• For a code C with minimum distance dmin, the error-correcting capability t
is given by

t = ⌊(dmin − 1) / 2⌋

In a received n-bit block, up to t erroneous bits can be corrected
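The two capability formulas above can be sketched as one-liners; the (6,3) code used throughout this lecture has dmin = 3 (the smallest non-zero codeword weight), so it detects 2 errors and corrects 1 (function names are illustrative):

```python
def detect_capability(d_min):
    """e = d_min - 1 erroneous bits detectable per block."""
    return d_min - 1

def correct_capability(d_min):
    """t = floor((d_min - 1) / 2) erroneous bits correctable per block."""
    return (d_min - 1) // 2

# The (6,3) example code has d_min = 3:
print(detect_capability(3), correct_capability(3))  # 2 1
```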

Generation of Codewords of Linear Block Code (1/4)

(Figure: the mapping (encoding) takes Vk onto C ⊂ Vn.)

• A matrix G is constructed by taking as its rows the
vectors {v1, v2, …, vk} forming the basis of C.
Generation of Codewords of Linear Block Code (2/4)
• Encoding an (n,k) block code:

U = mG

where
m is the (1 × k) row vector of the k information bits,
G is the (k × n) generator matrix for the given linear block code,
U is the (1 × n) codeword vector from the set of 2^k possible codewords {U}

• The k rows of matrix G are linearly independent code vectors from
the set {U}; they can generate all 2^k code vectors in {U}.

Generation of Codewords of Linear Block Code (3/4)

• Example: Block code (6,3)

       V1     1 1 0 1 0 0
G  =   V2  =  0 1 1 0 1 0
       V3     1 0 1 0 0 1

Message vector   Codeword
000              000000
100              110100
010              011010
110              101110
001              101001
101              011101
011              110011
111              000111
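The codeword table can be reproduced by carrying out U = mG over GF(2): the codeword is the XOR of the rows of G selected by the message bits. A sketch (not from the lecture; function names are illustrative):

```python
# Encode all 2^k messages of the (6,3) example code: U = mG (mod 2).
G = [
    [1, 1, 0, 1, 0, 0],  # V1
    [0, 1, 1, 0, 1, 0],  # V2
    [1, 0, 1, 0, 0, 1],  # V3
]

def encode(m, G):
    """Codeword U = mG over GF(2): XOR of the rows of G selected by m."""
    u = [0] * len(G[0])
    for bit, row in zip(m, G):
        if bit:
            u = [a ^ b for a, b in zip(u, row)]
    return u

for m in [(0,0,0), (1,0,0), (0,1,0), (1,1,0), (0,0,1), (1,0,1), (0,1,1), (1,1,1)]:
    print(m, "->", "".join(map(str, encode(m, G))))
```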

Generation of Codewords of Linear Block Code (4/4)
• Systematic block code (n,k)
• For a systematic code, the k information bits appear together
as either the first or the last k elements of the codeword
• The generator matrix for a systematic code is given as

G = [ P  Ik ]

where Ik is the k × k identity matrix and P is the k × (n − k) parity array
part of the generator matrix, with pij ∈ {0, 1}

U = mG
U = (u1, u2, …, un) = (p1, p2, …, pn−k, m1, m2, …, mk)
where p1, …, pn−k are the parity bits and m1, …, mk are the message bits

Decoding of Codewords of Linear Block Code

Parity check matrix

• For any linear code with generator matrix G (k × n), we
can find a matrix H ((n − k) × n) whose rows are orthogonal
to the rows of G:

GH^T = 0  UH^T = 0

• H is called the parity check matrix. We can use H to
test whether a received vector is a valid member of
the codeword set C = {U}
• For systematic linear block codes, H is given as

H = [ I_{n−k}  P^T ]
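This construction can be verified for the (6,3) code: with G = [P | I3] and H = [I3 | P^T] built from the same parity array P, the product GH^T must be the all-zero matrix over GF(2). A sketch (not from the lecture; helper names are illustrative):

```python
# Parity-check matrix for the (6,3) code: G = [P | I3]  =>  H = [I3 | P^T].
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
I3 = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

G = [P[i] + I3[i] for i in range(3)]                  # k x n generator
Pt = [[P[j][i] for j in range(3)] for i in range(3)]  # transpose of P
H = [I3[i] + Pt[i] for i in range(3)]                 # (n-k) x n parity check

def mat_mul_T(A, B):
    """A . B^T over GF(2): dot each row of A with each row of B, mod 2."""
    return [[sum(a * b for a, b in zip(ra, rb)) % 2 for rb in B] for ra in A]

print(mat_mul_T(G, H))  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]], i.e. G H^T = 0
```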
Decoding of Codewords of Linear Block Code…cont’d

(Figure: Data source → Format → Channel encoding (m → U) → Modulation → channel → Demodulation/Detection (→ r) → Channel decoding → Format → Data sink.)

• r = U + e, where
r = (r1, r2, …, rn) is the received vector and
e = (e1, e2, …, en) is the error pattern
U is one of the 2^k n-tuples, but the error pattern e (caused
by noise) can force r to become one of the remaining 2^n n-tuples

• Syndrome testing: S is called the syndrome of r. The syndrome test
performed on the corrupted code vector r, or on the corresponding error
pattern e that caused it, gives the same syndrome S

Syndrome Decoding using Standard Array

(Figure: the standard array. The first row contains the codewords, beginning with the all-zero codeword; the first column contains the coset leaders (correctable error patterns); each row is a coset.)

The standard array has 2^k columns and 2^(n−k) rows

Each entry in the standard array is a unique n-tuple
Syndrome Decoding using Standard Array

• Standard array and syndrome table decoding

1. Calculate the syndrome of r: S = rH^T
2. Locate the coset leader eˆ = ei whose syndrome equals S
3. Calculate Uˆ = r + eˆ and the corresponding mˆ

Uˆ = r + eˆ = (U + e) + eˆ = U + (e + eˆ)

• Note that
• If eˆ = e, the error is corrected.
• If eˆ ≠ e, an undetectable decoding error occurs.

• The correctable error patterns are the 2^(n−k) coset leaders in the first column of
the standard array
• Decoding will be correct if, and only if, the error pattern caused by the channel
is one of the coset leaders
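The three decoding steps can be sketched end-to-end for the (6,3) code, building the syndrome table from the coset leaders of its standard array (not from the lecture; function names are illustrative):

```python
# Syndrome-table decoding for the (6,3) code, following steps 1-3 above.
H_T = [(1,0,0), (0,1,0), (0,0,1), (1,1,0), (0,1,1), (1,0,1)]  # rows of H^T

# Coset leaders (the correctable error patterns of the standard array).
leaders = [(0,0,0,0,0,0), (0,0,0,0,0,1), (0,0,0,0,1,0), (0,0,0,1,0,0),
           (0,0,1,0,0,0), (0,1,0,0,0,0), (1,0,0,0,0,0), (0,1,0,0,0,1)]

def syndrome(r):
    """Step 1: S = r H^T over GF(2)."""
    s = [0, 0, 0]
    for bit, row in zip(r, H_T):
        if bit:
            s = [a ^ b for a, b in zip(s, row)]
    return tuple(s)

# Key each coset leader by its syndrome.
syndrome_table = {syndrome(e): e for e in leaders}

def decode(r):
    """Steps 2-3: look up the coset leader, then correct r."""
    e_hat = syndrome_table[syndrome(r)]
    return tuple(a ^ b for a, b in zip(r, e_hat))

print(decode((0, 0, 1, 1, 1, 0)))  # (1, 0, 1, 1, 1, 0): the single bit error is corrected
```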


Syndrome Decoding using Standard Array

• Example: Standard array for the (6,3) code

codewords (first row)
000000 110100 011010 101110 101001 011101 110011 000111
000001 110101 011011 101111 101000 011100 110010 000110
000010 110110 011000 101100 101011 011111 110001 000101
000100 110000 011110 101010 101101 011001 110111 000011
001000 111100 …
010000 100100 …      (each row is a coset)
100000 010100 …
010001 100101 …      010110

Coset leaders (correctable error patterns) form the first column
Syndrome Decoding using Standard Array

Example of a correctable error:

        1 0 0
        0 1 0
        0 0 1
H^T  =  1 1 0
        0 1 1
        1 0 1

Syndrome Table
Error pattern   Syndrome
000000          000
000001          101
000010          011
000100          110
001000          001
010000          010
100000          100
010001          111

U = (101110) is transmitted and r = (001110) is received.
The syndrome of r is computed:
S = rH^T = (001110)H^T = (100)
The error pattern corresponding to this syndrome is eˆ = (100000)
The corrected vector is estimated as
Uˆ = r + eˆ = (001110) + (100000) = (101110)

Hamming codes
• Hamming codes are a subclass of linear block codes and
belong to the category of perfect codes.
• Hamming codes can be expressed as a function of a single
integer m ≥ 2:

Code length:                  n = 2^m − 1
Number of information bits:   k = 2^m − m − 1
Number of parity bits:        n − k = m
Error correction capability:  t = 1

• The n columns of the parity-check matrix H comprise all
2^m − 1 non-zero binary m-tuples, where n = 2^m − 1.

A t-error-correcting perfect code cannot correct any error patterns
with Hamming weight greater than t
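The parameter formulas above generate the whole Hamming family; a small sketch tabulating the first few codes (not from the lecture; the function name is illustrative):

```python
# Hamming code parameters as a function of the single integer m >= 2.
def hamming_params(m):
    n = 2**m - 1        # code length
    k = 2**m - m - 1    # number of information bits
    return n, k, n - k  # (n, k, parity bits); n - k = m

for m in range(2, 6):
    n, k, p = hamming_params(m)
    print(f"m={m}: ({n},{k}) code, {p} parity bits, rate {k/n:.3f}")
```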

Hamming codes
• Example: Systematic Hamming code (7,4)

      1 0 0 0 1 1 1
H  =  0 1 0 1 0 1 1   = [ I3  P^T ]
      0 0 1 1 1 0 1

      0 1 1 1 0 0 0
G  =  1 0 1 0 1 0 0   = [ P  I4 ]
      1 1 0 0 0 1 0
      1 1 1 0 0 0 1

Example of the block codes

(Figure: PB vs. Eb/N0 [dB] performance curves for 8PSK and QPSK.)
