
Channel Coding: TM355: Communication Technologies

The document provides an introduction to channel coding and error correction techniques. It discusses how channel coding works to reduce error rates by detecting and correcting errors that occur during transmission. It covers basic techniques like parity checks and cyclic redundancy checks, which allow a receiver to detect errors in received data. More advanced techniques like Hamming codes and block codes are also introduced, which can detect and correct single-bit errors through the use of redundant bits and coding in blocks. The document serves as an overview of key concepts and terminology in channel coding.


TM355:

COMMUNICATION TECHNOLOGIES
BLOCK 2

PART 1:
CHANNEL CODING

Arab Open University


Prepared By: Dr. Naser Zaeri
OUTLINE

• Introduction
• Error detection
• Error-correction coding basics
• Techniques of error correction
• Advanced block codes
• Convolutional coding and trellis-coded
modulation
• Hybrid automatic repeat request (HARQ)
• Comparing and choosing codes

1. INTRODUCTION [1/3]

• The best performance of the channel cannot be achieved by modulation alone.
• A combination of channel coding and modulation is needed.

1. INTRODUCTION [2/3]

• Figure 1.2 shows the main function of the channel coding layer:
that of reducing the error rate.
• Channel coding and decoding can correct most of the errors that
occur in the channel.
• Controlling errors is the main function of channel coding.

1. INTRODUCTION [3/3]
• There are two different types of error-control coding: error-detection and
error-correction coding.
• Error-detection coding only allows the receiver to know when received data
contains errors, whereas error-correction coding also allows it to correct those
errors.
• With error detection, if there is a return channel so that the destination can
send a signal back to the source, it is possible to request the retransmission
of data that is found to contain errors.
• Some systems require the error-free receipt of data to be acknowledged,
and retransmission is triggered automatically if no receipt is
forthcoming within a predetermined time interval
→ automatic repeat request (ARQ).
• Therefore, to distinguish between error detection combined with ARQ
on the one hand, and error-correction coding on the other, the latter is
often called forward error correction (FEC).
2. ERROR DETECTION
2.1 PARITY CHECKS [1/3]
• The simplest idea in error-control coding is the parity check.
• The principle: for a given block of bits, you add one further
bit – the parity bit – which is chosen to be a 1 or a 0 so as to
ensure that the total number of 1s in the block of bits,
including the parity bit, is an even number.
• All the discussions assume the use of ‘even parity’, which is the most
common.
• However, parity checking can be done just as well for odd parity, in
which case the parity bit is chosen to ensure that the total number of 1s
in the block is odd.

2.1 PARITY CHECKS [2/3]

• Any single error in the block will now make the parity of the
block of bits odd, because an error changes either a 1 to a 0 or
a 0 to a 1 – and in either case the number of 1s in the block
changes from even to odd.
• The receiver can therefore just count the number of 1s in
the received block of bits; if the number is odd, it knows
there has been at least one error.
• Generally, systems operate with low error rates, meaning that
the probability of there being more than one error in a
block of bits is small, so a parity check is a good way of
being reasonably confident that there are no errors.
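The encoder and receiver sides of even parity can be sketched in a few lines of Python (an illustrative sketch, not part of the course materials):

```python
def add_even_parity(bits):
    """Append a parity bit chosen so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_even_parity(block):
    """Receiver-side check: count the 1s; an even count means no error detected."""
    return sum(block) % 2 == 0
```

Flipping any single bit of a valid block makes `check_even_parity` return False, which is exactly the single-error detection described above.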

2.1 PARITY CHECKS [3/3]

• Question: Which of the following might be valid code words using even parity?
a) 01101010
b) 01001100
c) 11110010
d) 11110110
• From the list given, (a) and (d) contain an even number of 1s
so may be valid.
• The other two contain an odd number of 1s so must be invalid.

2.2 CYCLIC REDUNDANCY CHECKS
(CRCS) [1/5]
• Although cyclic redundancy check coding is done on binary
data, it is convenient to explain the principles using
‘ordinary’ (denary or base 10) numbers.
• Assume that a denary message, M, consists of a collection of
denary digits, say 7654321.
• We divide this message number by a shorter number: G.
• Assume G = 99.
• The idea is to write 7654321 in terms of the largest number less
than 7654321 that is divisible exactly by 99, plus a remainder:
 7654 321 = 77316 × 99 + 37.
• We say that the result of division leaves a remainder, R, which
in this case is 37.
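Computing the check digits for the denary scheme above is a single modulo operation. A minimal sketch (the function name and fixed-width padding are assumptions for illustration):

```python
def denary_check_digits(message, g, width):
    """Check digits are the remainder when the message is divided by G,
    zero-padded to a fixed number of check-digit positions."""
    return str(message % g).zfill(width)
```

For example, `denary_check_digits(7654321, 99, 2)` reproduces the remainder 37 worked out above.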
2.2 CYCLIC REDUNDANCY CHECKS
(CRCS) [2/5]

• The idea behind CRCs is that the remainder (the digits of which are the
check digits) is sent to the decoder together with the message.
• The decoder then calculates the remainder for
itself and compares its own locally calculated value
with the value it received from the encoder: if they
are the same, it is assumed that all is well and there
have been no errors.

2.2 CYCLIC REDUNDANCY CHECKS
(CRCS) [3/5]
• Question: Suppose, with G = 99, that a decoder receives the
following messages and check digits:
a) message 234567, check digits 36
b) message 345678, check digits 22.
• In which case must there have been errors?
• Solution:
a) The remainder on dividing 234567 by 99 is 36, which is the same as
the received check, so there is no reason to suspect that there have
been any errors.
b) The remainder on dividing 345678 by 99 is 69, which is not the same
as the received check, so there must have been some errors.

2.2 CYCLIC REDUNDANCY CHECKS
(CRCS) [4/5]

• Question: Suppose that we use a denary cyclic redundancy check, with G = 999.
a) What is the code word if the message is 45454545?
b) The following code words are received at a decoder. What
are the message digits in each case, and do they appear to
contain errors?
i. 32132132296
ii. 52310642002

2.2 CYCLIC REDUNDANCY CHECKS
(CRCS) [5/5]
Solution:
a) The check digits are calculated as follows:
 45454545 / 999 = 45500.045…, meaning the remainder is 45454545 −
45500 × 999 = 45.
 So the transmitted sequence is 45454545045.
b)
i. For 32132132296, the data is 32132132 and the received check digits are 296.
Calculating the check digits (from 32132132 divided by 999) gives 296. This is
the same as the received check digits, so does not indicate any errors.
ii. For 52310642002, the data is 52310642 and the received check digits are 002.
Calculating the check digits (from 52310642 divided by 999) gives 005. This is
different from the received check digits, so there must have been errors.

2.3 GENERATOR POLYNOMIALS FOR
BINARY CRCS [1/2]
• In practice, CRCs work directly on binary sequences.
• It uses a special sort of arithmetic known as modulo-2
arithmetic.
• Modulo-2 arithmetic can be performed on binary sequences very easily
and rapidly in either hardware or software.
• Also, employing modulo-2 arithmetic leads to a simplification
in the method used to check for errors at a decoder → the decoder
just has to divide the received sequence by G and see if there is
any remainder:
• If there is no remainder (that is, if the remainder is 0) then there is no
evidence of errors.
• If it is anything other than zero, there must have been at least one error.
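Modulo-2 long division is just repeated XOR with no carries or borrows, which is why it is so cheap in hardware or software. A sketch (the 4-bit generator used in the test below is illustrative, not one of the standard CRC polynomials):

```python
def crc_remainder(bits, gen):
    """Modulo-2 long division of a bit string by a generator bit string;
    returns the remainder, which has len(gen) - 1 bits."""
    work = list(bits)
    for i in range(len(work) - len(gen) + 1):
        if work[i] == '1':
            # XOR the generator into the working register at this position
            for j, g in enumerate(gen):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return ''.join(work[-(len(gen) - 1):])
```

The encoder appends len(gen) − 1 zeros to the message, computes the remainder and replaces those zeros with it; the decoder divides the whole received word and expects an all-zero remainder if there is no evidence of errors.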
2.3 GENERATOR POLYNOMIALS FOR
BINARY CRCS [2/2]
• CRCs are specified by the value of G, which is a binary number; G is
usually described by a polynomial, the ‘generator polynomial’.
• Example: the polynomial x^4 + x^3 + 1 represents the binary
number 11001 (a 1 for each power of x that is present: x^4, x^3 and x^0).

3. ERROR-CORRECTION CODING BASICS

• Two types of error-correcting code:


• Rectangular coding: rarely used in practice, but is
an easy way to show how forward error correction
becomes possible by using multiple parity checks
on a block of data.
• Hamming coding: a very efficient method of error
correction which also has practical applications.

3.1 RECTANGULAR CODES [1/5]

• Rectangular code (also known as a product code) can detect the presence
of a single error, and also identify which bit is in error and therefore
correct it.
• A rectangular code is an example of a block code.
• The general concept of a (binary) block code in the context of error-control
coding is that the encoder takes the input data in successive blocks of k bits
and encodes them as n bits, where n > k so that the encoded data has some
redundancy.
• The code is described as an (n, k) block code.
• Block coding is one of the two broad categories of error-correcting code.
The other category is convolutional coding.
• One feature that distinguishes a block code from a convolutional code is that a block code
is memoryless.
• The encoding of the k bits does not depend upon what went before.

3.1 RECTANGULAR CODES [2/5]

• If no more than one error occurs per block, a rectangular code can
locate and hence correct it.
• Any one error will cause the parity check to fail in one row and one column.
• Knowing which row and which column have failed provides the coordinates
that identify the bit that is in error – and once the location of this bit is
known, it can be corrected.
• If two errors (or any even number of errors) occur in the same row, the
parity check for that row will still pass – so although there will be failed
parity checks in the affected columns, it will not be possible to tell which
row is the one that contains the errors.
• The same is true for an even number of errors in the same column.
• Rectangular codes are therefore not very good at correcting error
bursts.
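The row-and-column lookup described above can be sketched as follows (an illustrative sketch; the layout assumed here, with a parity bit ending each row and a final row of column parities, is one common arrangement):

```python
def locate_single_error(block):
    """block: list of rows of bits; each row ends with its parity bit and
    the final row holds the column parity bits. Returns the (row, col)
    coordinates of a single error, or None if every parity check passes."""
    bad_rows = [i for i, row in enumerate(block) if sum(row) % 2 == 1]
    bad_cols = [j for j in range(len(block[0]))
                if sum(row[j] for row in block) % 2 == 1]
    if not bad_rows and not bad_cols:
        return None                      # no evidence of errors
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        return bad_rows[0], bad_cols[0]  # intersection identifies the bad bit
    raise ValueError("error pattern beyond a single correctable error")
```

As the slide notes, an even number of errors in one row or column defeats the scheme: the corresponding check still passes, so the coordinates can no longer be pinned down.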

3.1 RECTANGULAR CODES [3/5]

• Although rectangular codes are very simple, they are not very
efficient, because they use a relatively large number of parity
digits per data digit.
• One measure of the efficiency of an (n, k) block code
is the ratio k/n, which is called the code rate.
• Another related measure is the code redundancy,
given by:
 redundancy = (n − k)/n = 1 − code rate.
3.1 RECTANGULAR CODES [4/5]

• Question: The figure represents a code word from a rectangular code.
a) How many parity digits are used per code word to check for errors?
b) Describe this code using the (n, k) notation.
c) Calculate the code rate and the redundancy of this code.
d) Assuming that no more than one digit is in error, how many different
errors can be corrected using this code?

3.1 RECTANGULAR CODES [5/5]

Solution:
a) The diagram shows that eight parity digits are used to check
for errors, out of 20 digits in total.
b) n is the total number of bits in the code, 20, and k is the
number of bits in the message, which is 12. The code is
therefore a (20, 12) code.
c) The code rate = k/n = 12/20 = 60%.
The redundancy = (n − k)/n = 8/20 = 40%.
d) The rectangular code can correct a single error at any digit
position, including the parity digits.
d) The rectangular code can correct a single error at any digit
position, including the parity digits.

3.2 HAMMING CODES [1/8]

• For any integer m, it is possible to construct a Hamming code
that uses m parity digits to correct any single error in a code
word of size n digits, where:
 n = 2^m − 1
• The n digits of the code word are made up of the k message
digits and the m parity digits, so n = k + m.

3.2 HAMMING CODES [2/8]

Activity:
a) What would be the total number of digits per Hamming code word if
four of the digits were used for parity checks?
b) Describe this code using the (n, k) notation and calculate its code rate.
Solution:
a) Four corresponds to m, the number of parity digits. Substituting m = 4
gives the total number of digits: n = 2^4 − 1 = 15.
b) Of the 15 digits, four are parity digits and 11 are the digits of the
original message, so n is 15 and k is 11 → it is a (15, 11) code and the
code rate is 11/15 ≈ 0.73.

3.2 HAMMING CODES [3/8]

• The parity-check digits in a Hamming code are applied to groups of
message digits in such a way that a single error can be located, and
then corrected.
• The parity digits can be positioned in the coded word
so that a binary number representing the result of the
checks, the syndrome, points directly to the error, if
there is one.

3.2 HAMMING CODES [4/8]

• Figure 1.6 represents a seven-digit Hamming code for a four-digit message.

3.2 HAMMING CODES [5/8]

3.2 HAMMING CODES [6/8]

• Activity: If the following codes are received, state whether there have
been any errors, and give the decoded output. (Assume that the
probability of there being more than one error in a received code word
is negligible.)
i. 1110000
ii. 1101011

3.2 HAMMING CODES [7/8]

i. 1110000:
• The three parity checks are as follows:
• Parity 1 tests parity on digits 1, 3, 5 and 7:
1110000. This has an even number of ones (two), so passes.
• Parity 2 tests parity on digits 2, 3, 6 and 7: 1110000. This has an even
number of ones (two), so passes.
• Parity 3 tests parity on digits 4, 5, 6 and 7: 1110000. This has an even
number of ones (none), so passes.
• All three parity checks pass, so the syndrome is 000, and there are no
errors. The decoded data is just extracted from the codeword (digits 3,
5, 6 and 7): 1000.
• Alternatively, scanning through the codewords in Table 1.2, the
received codeword 1110000 is found to be a valid codeword
corresponding to the original data word 1000.
3.2 HAMMING CODES [8/8]

ii. 1101011:
• Parity 1: 1101011. This has an even number of ones (two), so passes.
• Parity 2: 1101011. This has an odd number of ones (three), so fails.
• Parity 3: 1101011. This has an odd number of ones (three), so fails.
• Writing 0 for a pass and 1 for a fail, the syndrome is 110. This is the
binary number 6, indicating that digit number 6 is in error.
• The corrected code word is therefore 1101001. The decoded data is
extracted from the corrected code word (digits 3, 5, 6 and 7): 0001.
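The three checks and the syndrome lookup above translate directly to code. A sketch for this (7, 4) code, with digits numbered 1–7, parity bits in positions 1, 2 and 4, and data in positions 3, 5, 6 and 7:

```python
def hamming74_decode(code):
    """Decode a 7-bit Hamming code word given as a string of digits 1..7.
    Corrects a single error if the syndrome is non-zero."""
    bits = [int(b) for b in code]              # bits[0] is digit 1
    parity = lambda idxs: sum(bits[i - 1] for i in idxs) % 2
    s1 = parity([1, 3, 5, 7])                  # 0 = pass, 1 = fail
    s2 = parity([2, 3, 6, 7])
    s3 = parity([4, 5, 6, 7])
    syndrome = s3 * 4 + s2 * 2 + s1            # binary number s3 s2 s1
    if syndrome:
        bits[syndrome - 1] ^= 1                # flip the digit it points to
    return ''.join(str(bits[i - 1]) for i in [3, 5, 6, 7])
```

Running it on the two activity code words reproduces the worked answers: 1110000 decodes cleanly to 1000, and 1101011 gives syndrome 110 (six), so digit 6 is flipped before extracting 0001.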

3.3 HAMMING DISTANCE [1/2]

• This measure of closeness between code words is an important concept
in error-control coding: it is called the Hamming distance.
• It is defined as the number of places in which the code words differ.
• Example:
• 0001111 and 0000111 gives a distance of 1.
• 1100110 and 0000111 have a distance of 3.
• Identical code words have a distance of 0.
• The Hamming distance of a code is the minimum Hamming
distance between any two code words used by the code.
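The definition is a one-liner (an illustrative sketch):

```python
def hamming_distance(a, b):
    """Number of positions in which two equal-length code words differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))
```

It reproduces the three examples above: distance 1, distance 3, and distance 0 for identical words.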

3.3 HAMMING DISTANCE [2/2]

• In general, a code with a minimum Hamming distance of d can:
• correct up to (d − 1)/2 errors if d is odd, or up to
(d/2) − 1 errors if d is even
• detect up to d − 1 errors without correcting them.

4. TECHNIQUES OF ERROR CORRECTION
4.3 INTERLEAVING [1/3]

• Most error-correcting codes cannot correct bursts of errors, yet errors
often do occur in bursts.
• There might be bursts of tens or even hundreds of errors
occurring very infrequently, and no errors for the rest of the
time.
• A technique that can be employed to allow error-
correcting codes to protect against bursts is
interleaving.

4.3 INTERLEAVING [2/3]

4.3 INTERLEAVING [3/3]

• Delay, referred to as latency, is an important measure of the
quality of a communications channel.
• If we are interleaving p code words from a block error-correcting
code with code words of length n bits (using a matrix with p rows
and n columns), the total delay will be p × n bits.
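A sketch of block interleaving with a p × n matrix: code words go in as rows and the channel stream is read out by columns, so a burst of up to p consecutive channel errors is spread out as at most one error per code word:

```python
def interleave(codewords):
    """Write p code words into the rows of a matrix, transmit by columns."""
    p, n = len(codewords), len(codewords[0])
    return [codewords[i][j] for j in range(n) for i in range(p)]

def deinterleave(stream, p, n):
    """Receiver side: write the stream back by columns, read out the rows."""
    rows = [[0] * n for _ in range(p)]
    for idx, bit in enumerate(stream):
        rows[idx % p][idx // p] = bit
    return rows
```

The latency comes from having to fill the whole matrix before anything can be read out.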

5. ADVANCED BLOCK CODES
5.1 LDPC CODES [1/2]
• Low-density parity check
(LDPC) codes are block
codes that are based on parity
checks.
• The important information in
this technique, the position of
the arrows, is also given in
the form of a parity-check
matrix.

5.1 LDPC CODES [2/2]

• Each row is a parity check, and the positions of the bits involved in
that check are indicated by ones.
• LDPC codes have a parity-check matrix that does not contain many ones.
• They have large block sizes, and many parity checks, but each parity
check involves only a small number of bits → hence ‘low density’: the
density of ones in the matrix is low.
• Applications:
• used in IEEE 802.16 (WiMAX)
• used in 802.11 (WiFi)
• used in 10 Gigabit Ethernet
• satellite broadcasting.
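The row-as-parity-check idea can be shown with a tiny matrix (purely illustrative: it is far too small and dense to be a real LDPC matrix). A received word is consistent with the code exactly when every row check sums to zero modulo 2:

```python
def syndrome(H, word):
    """One modulo-2 check per row of H; the 1s in a row mark which
    bits of the word take part in that check."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

# Illustrative parity-check matrix: 3 checks on a 6-bit word.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
```

An all-zero syndrome gives no evidence of errors; any non-zero entry flags a failed check, which is the starting point for LDPC decoding.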
5.2 EAN-13 CODE [1/2]

• A non-binary code used for error detection in product barcodes.
• Has 12 identification digits and a single check digit.

5.2 EAN-13 CODE [2/2]
• The check digit is calculated as follows:
1) Count digit positions from the left to the right, starting at 1.
2) Sum all the digits in odd positions. (In the example shown in the
figure, this is 9 + 8 + 5 + 1 + 2 + 5 = 30 – note that the final 5 is not
included, since this is the check digit.)
3) Sum all the digits in even positions and multiply the result by 3.
(In the example, this is (7 + 0 + 2 + 4 + 5 + 7) × 3 = 75.)
4) Add the results of step 2 and step 3.
5) Take the answer modulo 10. (In the example, the sum is 30 + 75 =
105, so the units digit is 5.)
6) If the answer to step 5 is 0, this is the check digit. Otherwise the
check digit is given by ten minus the answer from step 5. (In the
example, this is 10 − 5 = 5.)
7) The check digit is appended to the right of the 12 identification digits.
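Steps 1–7 translate directly to code. The figure's full 12 identification digits are not reproduced here, so the test string below is a hypothetical one chosen only to match the digit sums in the example (odd positions summing to 30, even positions to 25):

```python
def ean13_check_digit(digits12):
    """Check digit for a string of 12 EAN-13 identification digits."""
    odd_sum = sum(int(d) for d in digits12[0::2])    # positions 1, 3, ..., 11
    even_sum = sum(int(d) for d in digits12[1::2])   # positions 2, 4, ..., 12
    r = (odd_sum + 3 * even_sum) % 10                # steps 4 and 5
    return 0 if r == 0 else 10 - r                   # step 6
```

With the example sums (30 + 75 = 105), the units digit is 5 and the check digit is 10 − 5 = 5, as in the slide.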
5.3 REED–SOLOMON CODES [1/4]

• Reed–Solomon codes (RS codes) were invented in 1960 by
Irving S. Reed and Gustave Solomon of MIT Lincoln Laboratory.
• Designed for symbols that are digits to bases other than 2 (binary).
• Of particular interest are codes that use digits to base 256.
• Key feature of RS codes: their ability to deal with long bursts of
errors.
• Application: used in Blu-ray discs.

5.3 REED–SOLOMON CODES [2/4]
• The (n, k) notation is used for non-binary Reed-Solomon
codes as well.
• As before, n is the encoded block size, but the block now
consists of digits to base q.
• Similarly, k is the number of message digits, each of which is
to base q.
• The other n − k digits are the parity digits (base-q symbols).
• With the base-256 code: an (n, k) code has a block size of n
bytes made up of k bytes of information and n − k parity
bytes.
• Maximum block size is 255 digits.

5.3 REED–SOLOMON CODES [3/4]

• There are two steps to correcting errors in RS codes:
1. Identify which of the digits (bytes) have errors.
2. Correct those digits.
• The decoder produces syndromes from the parity checks.
• In RS codes there are two sets of syndromes:
• One set identifies where the errors are – which digits (bytes) they
are in
• the other set identifies how to correct them.
• If the locations of the errors are known – which of the bytes
contain errors – then, it is possible to correct more errors than
if the errors’ locations are not known.

5.3 REED–SOLOMON CODES [4/4]

• A symbol already known to be in error, and just needing correction,
is called (in the language of coding theory) an erasure.
• Symbols that have errors where the location is not known
in advance are just called symbol errors.
• In general, in each block of n digits an (n, k) RS code can:
• correct up to (n − k)/2 symbol errors
• or correct up to n − k erasures.
• More generally, if there are both erasures and symbol errors to be
corrected, the code can correct t symbol errors and e erasures,
where: 2t + e ≤ n − k.

6. CONVOLUTIONAL CODING AND TRELLIS-CODED
MODULATION [1/2]

• The key difference between convolutional codes and block codes is
that convolutional codes have memory.
• What that means is that the coding of message data depends not only on
the current message, but also on what has gone before.
• Convolutional codes are often constructed with high
redundancy to give good error correction.
• They generally allow easy construction of encoders and
decoders, and are used where high rates of error correction
are required from a minimum of circuitry.

6. CONVOLUTIONAL CODING AND TRELLIS-CODED
MODULATION [2/2]

• In satellite communications, for example, the received signal power
might be low and variable, which means that the bit error rate in
the channel is sometimes high.
• But physical weight has to be restricted in a satellite and
electrical power is limited, so simple circuitry is an
advantage.
• Similarly, in mobile communications the circuitry needs
to be kept to a minimum to save weight and minimise
battery drain in the handset.

6.1 ENCODING [1/4]

• The simplest possible convolutional code would encode each single
bit by two bits.
• Such a coding process can be
displayed in the form of a ‘tree’.
• The coding is done by reading from
left to right; at each branching point
take the upper branch if an input bit
is a 0 and the lower branch if the
input bit is a 1.
• At each stage, the label on the branch
shows what the output is.
6.1 ENCODING [2/4]

• Since each input leads to the output of two bits, this is equivalent
to a (2, 1) code in the (n, k) terminology.
• The coding of the input sequence 1100 is
shown in Figure 1.21(b) by the path
highlighted in red.
• The output is 11 01 10 00.
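A two-state encoder consistent with this example can be written in a few lines; the generator is inferred from the example output, so treat it as an assumption rather than the course's exact encoder. The state s is the previous input bit, and each input bit b emits the pair (b XOR s, b):

```python
def conv_encode(bits):
    """(2, 1) convolutional encoder: for each input bit b, output the
    pair (b ^ s, b), where the one-bit state s is the previous input."""
    s, out = 0, []
    for b in bits:
        out += [b ^ s, b]
        s = b   # the memory: the encoding depends on what went before
    return out
```

`conv_encode([1, 1, 0, 0])` reproduces the highlighted path's output 11 01 10 00.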

6.1 ENCODING [3/4]
• Note that the tree expands rapidly with more input bits because it
explicitly shows the coding of every possible combination of input bits.
• However, an important simplification is possible: the tree repeats itself.
• The expanding tree may be fully represented by the trellis shown in Figure
1.22.
• It is the existence of these two states that provides the memory.
6.1 ENCODING [4/4]
• Although the encoder here has only two states, more advanced
convolutional encoders have more.
• Figure 1.23 is the trellis for an encoder that has four internal states.
• The same coding rule applies.

6.2 DECODING [1/2]
• A common method of decoding convolutional codes is
known as Viterbi decoding after its inventor, Andrew J.
Viterbi.
• How Viterbi decoding works: at any point, the decoder takes
account of both what has gone before and what comes after in
deciding how to decode the current data.
• Viterbi decoding explores all the paths through the trellis,
comparing the Hamming distance between each path and
the received sequence; but whenever paths converge, all
except one of them is discarded.

6.2 DECODING [2/2]

• Example: Suppose a decoder of the code with the tree shown in the
figure receives the following sequences. Decide what each sequence
should be decoded to by looking at possible paths through the tree.
a) 00 00 00 00
b) 00 10 00 00
• Solution:
a) This corresponds to the path when data is 0000.
b) There is no path through the decoder that would result
in this output stream, so there must have been one or
more errors. However, if we assume that the third bit in
the stream is in error, and should be a 0 rather than a 1,
then we are back to 00 00 00 00, which should be decoded
to 0000 as in part (a). This is the most likely possibility
(assuming the smallest number of errors).
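The reasoning in the solution — compare the received stream against the output of every possible path and pick the closest — can be done by brute force for short messages; Viterbi decoding computes the same answer without enumerating every path. The encoder used here is the illustrative (b XOR s, b) encoder inferred from the earlier example, so treat it as an assumption:

```python
from itertools import product

def min_distance_decode(received, k):
    """Try every k-bit message, encode it, and return the message whose
    encoding is nearest in Hamming distance to the received bit list."""
    def encode(bits):
        s, out = 0, []
        for b in bits:
            out += [b ^ s, b]
            s = b
        return out
    def distance(a, b):
        return sum(x != y for x, y in zip(a, b))
    return list(min(product([0, 1], repeat=k),
                    key=lambda m: distance(encode(list(m)), received)))
```

For the received stream 00 10 00 00 there is no path at distance 0, and the nearest path (one bit away) is the encoding of 0000, matching the worked answer.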
7. HYBRID AUTOMATIC REPEAT REQUEST
(HARQ) [1/6]
• There are two ways of reducing the bit error rate: error detection with
ARQ, and forward error correction (FEC).
• Depending on the channel, the flexibility of ARQ can be beneficial.
• An optical-fibre link connecting telephone exchanges, for example,
would be very stable, whereas a typical wireless link between a
mobile phone and the base station would vary all the time.
• Mobile communications networks cope with the variable channel
conditions by choosing the modulation and coding to suit the
channel at the time.
• Real-time interactive communication, such as telephony, would be
severely hampered by long and variable delays such as would result
from ARQ over a large physical distance, whereas these delays would
not be so important for, say, downloading a web page.
7. HYBRID AUTOMATIC REPEAT REQUEST
(HARQ) [2/6]

7. HYBRID AUTOMATIC REPEAT REQUEST
(HARQ) [3/6]
• One method used to achieve some of the merits of both ARQ
and FEC is a hybrid system that makes use of both and is
known as hybrid automatic repeat request (HARQ).
• This system encodes the data with an FEC and corrects any
errors if it can, but if the errors exceed the correction
capability of the FEC scheme, it requests retransmission of the
data.
• There are a number of different types of HARQ.

7. HYBRID AUTOMATIC REPEAT REQUEST
(HARQ) [4/6]
• Type I HARQ is the simplest.
• If the errors cannot all be corrected, then the data is resent,
coded exactly the same as it was the first time.
• The receiver then just decodes the newly received data.
• If all the errors can be corrected on this occasion, then
transmission of that data is complete.
• If there are still errors that cannot be corrected, the data needs
to be resent again.
• In a more complex version of Type I HARQ, known as Chase
combining, the received retransmitted data is combined with
the originally received data so that the decoding the second
time uses information from both transmissions.
7. HYBRID AUTOMATIC REPEAT REQUEST
(HARQ) [5/6]
• In Type II HARQ, the retransmitted signal is different.
• The most common version is known as incremental
redundancy and works by successive retransmissions sending
additional redundant information to aid with error correction.
• The combined retransmissions build up to form ever more
powerful (more redundant) error-correction coding.
• For example, the initial transmission might contain only the
message with error-detection – not error-correction – coding.
• If there are no errors, the data is extracted and transmission
has been successful with the minimum of redundancy.
• If there are errors, the transmitter is notified and sends parity
bits for error correction.
7. HYBRID AUTOMATIC REPEAT REQUEST
(HARQ) [6/6]
• This retransmission does not include the message digits, only the
parity bits.
• However, the initial transmission together with the retransmission
provides the full message-plus-parity bits to constitute a code word of
an error-correcting code.
• The receiver can now try to correct the errors that were known to be
in the initial transmission but which could not, at that stage, be
corrected.
• If successful, then the error-free transmission has been accomplished.
• If, however, there are still errors, then more parity bits are sent so that
the data from the three transmissions combined constitutes the code
word of a more powerful (more redundant) error-correcting code.

8. COMPARING AND CHOOSING CODES [1/2]

• How do we select which code is best in a given application?
• Here are a few factors that may be relevant:
• Comparing bit error rates
• Coding gain
• What is the distribution of errors – do they come in bursts, for
example?
• What is the structure of the data? Is it already in bytes, packets,
frames?
• Is it streamed? Is it multiplexed or interleaved?

8. COMPARING AND CHOOSING CODES [2/2]

• How serious is it if an error escapes detection?
• Is there a return channel (so that ARQ can be used)?
• Is latency an issue (is it real-time data)?
• Is complexity in the encoder or decoder an issue?
• Is the code patented? (For example, patents are still current on
turbo codes, but have expired for LDPC codes.)

