Turbo Encoder Theory

Forward Error Correction (FEC) allows a receiver to detect and correct errors in transmitted data without requesting retransmission. It works by adding redundant error-correcting codes during encoding at the sender. This redundancy enables the receiver to correct a limited number of errors. FEC improves reliability but reduces bandwidth efficiency compared to techniques like automatic repeat request that retransmit corrupted data. The maximum error correction capability depends on the specific FEC code used.

Uploaded by

Raveendranath KR
Copyright © All Rights Reserved

Forward Error Correction (FEC)

In telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels.

The central idea is that the sender encodes the message in a redundant way by using an error-correcting code (ECC).

The redundancy allows the receiver to detect a limited number of errors that may
occur anywhere in the message, and often to correct these errors without
retransmission. FEC gives the receiver the ability to correct errors without needing a
reverse channel to request retransmission of data, but at the cost of a fixed, higher
forward channel bandwidth.

FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initial analog-to-digital conversion in the receiver. The Viterbi decoder implements a soft-decision algorithm to demodulate digital data from an analog signal corrupted by noise.

Many FEC coders can also generate a bit-error rate (BER) signal which can be used as
feedback to fine-tune the analog receiving electronics.

The maximum fraction of errors or missing bits that can be corrected is determined by the design of the FEC code, so different forward error-correcting codes are suitable for different conditions.

FEC is accomplished by adding redundancy to the transmitted information using a predetermined algorithm. A redundant bit may be a complex function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output are systematic, while those that do not are non-systematic.

A simple example of FEC is to transmit each data bit three times, which is known as a (3,1) repetition code. Through a noisy channel, a receiver might see any of 8 versions of each transmitted triplet; decoding by majority vote recovers the original bit whenever at most one of the three copies is corrupted:

Triplet received   Interpreted as
000                0 (error-free)
001                0
010                0
100                0
111                1 (error-free)
110                1
101                1
011                1
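The (3,1) repetition scheme above can be sketched in a few lines of code; the function names here are illustrative, not from any standard library:

```python
# Minimal sketch of a (3,1) repetition code with majority-vote decoding.

def encode(bits):
    """Repeat each data bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Majority vote over each group of three received bits."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

data = [1, 0, 1]
tx = encode(data)        # [1, 1, 1, 0, 0, 0, 1, 1, 1]
tx[1] ^= 1               # flip at most one bit per triplet
tx[4] ^= 1
print(decode(tx))        # recovers [1, 0, 1]
```

Note that if two bits of the same triplet are flipped, the majority vote decodes the wrong value, which illustrates the limited correction capability of the code.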
Forward Error Correction (FEC) – Averaging Noise

FEC could be said to work by "averaging noise"; since each data bit affects many
transmitted symbols, the corruption of some symbols by noise usually allows the
original user data to be extracted from the other, uncorrupted received symbols that
also depend on the same user data.

Because of this "risk-pooling" effect, digital communication systems that use FEC tend
to work well above a certain minimum signal-to-noise ratio and not at all below it.

This all-or-nothing tendency, known as the cliff effect, becomes more pronounced as stronger codes are used that more closely approach the theoretical Shannon limit.

Interleaving FEC-coded data can reduce the all-or-nothing properties of transmitted FEC codes when channel errors tend to occur in bursts. However, this method has limits; it is best applied to narrowband data.
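A simple block interleaver illustrates the idea: symbols are written into a matrix row by row and read out column by column, so a burst of consecutive channel errors is spread across many codewords after de-interleaving. The 3x4 geometry below is an arbitrary illustrative choice:

```python
# Sketch of a block interleaver: write row-wise, read column-wise.

def interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # Reading the columns of a rows x cols matrix is the same as writing
    # the rows of a cols x rows matrix, so the inverse swaps dimensions.
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))
sent = interleave(data, 3, 4)   # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
assert deinterleave(sent, 3, 4) == data
```

After interleaving, a burst hitting three adjacent transmitted symbols (e.g. 0, 4, 8) corrupts only one symbol in each original row, which a per-row FEC code can then correct.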

Most telecommunication systems use a fixed channel code designed to tolerate the expected worst-case bit error rate, and then fail to work at all if the bit error rate is ever worse. However, some systems adapt to the given channel error conditions: hybrid automatic repeat request (HARQ) uses a fixed FEC method as long as the FEC can handle the error rate, then switches to ARQ when the error rate gets too high; adaptive modulation and coding (AMC) uses a variety of FEC rates, adding more error-correction bits per packet when channel error rates are higher, or removing them when they are not needed.
Channel – Shannon’s Theorem

The noisy-channel coding theorem (Shannon's theorem, or the Shannon limit) establishes that for any given degree of noise contamination of a communication channel, it is possible to communicate discrete data (digital information) nearly error-free up to a computable maximum rate through the channel.

This theorem describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption.

It does not describe how to construct the error-correcting method; it only tells us how good the best possible method can be.

The theorem states that, given a noisy channel with capacity C and information transmitted at a rate R, if R < C there exist codes that allow the probability of error at the receiver to be made arbitrarily small. This means it is theoretically possible to transmit information nearly without error at any rate below the limiting rate C.

Conversely, if R > C, an arbitrarily small probability of error is not achievable. All codes will have a probability of error greater than a certain positive minimum level, and this level increases as the rate increases.

So information cannot be guaranteed to be transmitted reliably across a channel at rates beyond the channel capacity.

This theorem does not address the case of R=C.

The channel capacity C can be calculated from the physical properties of a channel; for a band-limited channel with Gaussian noise, it is given by the Shannon–Hartley theorem.
Channel Capacity – Shannon Hartley Theorem

This theorem tells us the maximum rate at which information can be transmitted over a
communications channel of a specified bandwidth in the presence of noise.

The theorem establishes Shannon’s channel capacity for such a communication link, a bound on the maximum amount of error-free digital data (information) that can be transmitted with a specified bandwidth in the presence of noise interference, assuming that the signal power is bounded and that the Gaussian noise process is characterized by a known power or power spectral density.

Channel capacity C, meaning the theoretical tightest upper bound on the information rate (excluding error-correcting codes) of clean (arbitrarily low BER) data that can be sent with a given average signal power S through an analog communication channel subject to additive white Gaussian noise of power N, is

C = B log2(1 + S/N)

C: channel capacity in bits per second
B: bandwidth of the channel in hertz (passband in the case of a modulated signal)
S: average received signal power over the bandwidth, in watts or volts squared
N: average noise or interference power over the bandwidth, in watts or volts squared
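As a worked example of the formula, consider a telephone-grade channel with an assumed bandwidth of 3 kHz and a signal-to-noise ratio of 30 dB (a linear factor of 1000); the numbers are illustrative, not from the slides:

```python
# Worked example of the Shannon-Hartley capacity formula.
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (30 / 10)                # 30 dB -> linear SNR of 1000
print(channel_capacity(3000, snr))   # about 29.9 kbit/s
```

No practical code reaches this bound exactly, but turbo codes, introduced later in this deck, come within a fraction of a decibel of it.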
