Turbo Encoder Theory
The central idea is that the sender encodes the message in a redundant way by using an
error-correcting code (ECC).
The redundancy allows the receiver to detect a limited number of errors that may
occur anywhere in the message, and often to correct these errors without
retransmission. Forward error correction (FEC) gives the receiver the ability to correct
errors without needing a reverse channel to request retransmission of data, but at the
cost of a fixed, higher forward-channel bandwidth.
Many FEC coders can also generate a bit-error-rate (BER) signal, which can be used as
feedback to fine-tune the analog receiving electronics.
The maximum fraction of errors or missing bits that can be corrected is determined
by the design of the FEC code, so different forward error-correcting codes are suitable
for different conditions.
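To make the redundancy idea concrete, here is a minimal sketch in Python of the simplest FEC scheme, a rate-1/3 repetition code (a toy example of our own, not how turbo codes actually encode): each data bit is sent three times, and a majority vote at the receiver corrects any single flipped symbol per group without a reverse channel.

def encode(bits):
    # Rate-1/3 repetition code: repeat every data bit three times.
    return [b for b in bits for _ in range(3)]

def decode(symbols):
    # Majority vote over each group of three received symbols;
    # any single flipped symbol per group is corrected with no
    # retransmission request.
    out = []
    for i in range(0, len(symbols), 3):
        group = symbols[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

data = [1, 0, 1, 1]
tx = encode(data)          # 12 transmitted symbols for 4 data bits
tx[1] ^= 1                 # channel flips one symbol...
tx[9] ^= 1                 # ...and another, in a different group
assert decode(tx) == data  # both errors corrected at the receiver

Repetition coding is very wasteful; practical codes such as turbo codes buy far more correction per redundant bit, which is exactly what the capacity results below quantify.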
Forward Error Correction (FEC)
FEC could be said to work by "averaging noise": since each data bit affects many
transmitted symbols, even when noise corrupts some of those symbols, the original
user data can usually be extracted from the other, uncorrupted received symbols that
also depend on the same user data.
Because of this "risk-pooling" effect, digital communication systems that use FEC tend
to work well above a certain minimum signal-to-noise ratio and not at all below it.
Interleaving FEC-coded data can reduce the all-or-nothing property of transmitted
FEC codes when the channel errors tend to occur in bursts. However, this method has
limits; it is best used on narrowband data.
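The sketch below illustrates the burst-spreading effect with a simple rows-in, columns-out block interleaver (the 3x4 geometry and the function names are our own, chosen only for illustration): a burst of three consecutive channel errors lands in three different codewords after deinterleaving, leaving each codeword with a single error its FEC code can fix.

def interleave(symbols, rows, cols):
    # Write symbols row by row into a rows x cols block,
    # then read them out column by column.
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # The inverse permutation is interleaving with the
    # dimensions swapped.
    return interleave(symbols, cols, rows)

codewords = list(range(12))     # stand-ins for 3 codewords of 4 symbols
tx = interleave(codewords, 3, 4)
burst = set(tx[4:7])            # a burst wipes out 3 consecutive symbols
rx = deinterleave(tx, 3, 4)
print([i for i, s in enumerate(rx) if s in burst])
# [2, 5, 9]: one corrupted symbol per codeword, not three in one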
Forward Error Correction (FEC) – Averaging Noise
Most telecommunication systems use a fixed channel code designed to tolerate the
expected worst-case bit error rate, and then fail to work at all if the bit error rate is
ever worse. However, some systems adapt to the given channel error conditions:
hybrid automatic repeat-request (HARQ) uses a fixed FEC method as long as the FEC can
handle the error rate, then switches to ARQ when the error rate gets too high;
adaptive modulation and coding (AMC) uses a variety of FEC rates, adding more error-
correction bits per packet when there are higher error rates in the channel, or removing
them when they are not needed.
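The rate-selection step in adaptive modulation and coding can be sketched as follows; the code rates, BER thresholds, and names here are hypothetical, chosen to illustrate the idea rather than taken from any standard (real systems such as LTE define their own rate tables):

# Hypothetical table: (code rate, rough channel BER it can tolerate),
# ordered from the strongest code to the lightest.
RATE_TABLE = [
    (1/3, 1e-2),   # heavy protection: two parity bits per data bit
    (1/2, 1e-3),
    (3/4, 1e-4),
    (5/6, 1e-5),   # light protection for clean channels
]

def pick_rate(measured_ber):
    # Choose the highest code rate whose tolerance still covers the
    # measured error rate; fall back to the strongest code otherwise.
    for rate, tolerable_ber in reversed(RATE_TABLE):
        if measured_ber <= tolerable_ber:
            return rate
    return RATE_TABLE[0][0]

print(pick_rate(5e-6))   # 5/6: clean channel, few error-correction bits
print(pick_rate(5e-3))   # 1/3: noisy channel, heavy protection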
Channel – Shannon’s Theorem
The noisy-channel coding theorem (Shannon’s theorem, or the Shannon limit) establishes
that, for any given degree of noise contamination of a communication channel, it is possible
to communicate discrete data (digital information) nearly error-free up to a computable
maximum rate through the channel.
This theorem describes the maximum possible efficiency of error-correcting methods versus
levels of noise interference and data corruption.
It does not describe how to construct the error-correcting method; it only tells us how
good the best possible method can be.
The theorem states that, given a noisy channel with capacity C and information
transmitted at a rate R, if R < C there exist codes that allow the probability of error
at the receiver to be made arbitrarily small. This means it is theoretically possible to
transmit information nearly without error at any rate below the limiting rate C.
Conversely, if R > C, an arbitrarily small probability of error is not achievable: all codes
will have a probability of error greater than a certain positive minimal level, and this
level increases as the rate increases.
The channel capacity C can be calculated from the physical properties of a channel; for a
band-limited channel with Gaussian noise, it is given by the Shannon-Hartley theorem.
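In a standard compact formulation (stated here in LaTeX for precision; P_e denotes the block error probability at the receiver), the two directions of the theorem read:

% Achievability: below capacity, the error probability can be made
% as small as desired by a suitable code.
\[
  R < C \;\Longrightarrow\; \forall \varepsilon > 0 \ \exists\ \text{a code of rate } R \text{ with } P_e < \varepsilon .
\]
% Converse: above capacity, the error probability is bounded away
% from zero for every code.
\[
  R > C \;\Longrightarrow\; \exists\, \delta > 0 \text{ such that every code of rate } R \text{ has } P_e \ge \delta .
\]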
Channel Capacity – Shannon Hartley Theorem
This theorem tells us the maximum rate at which information can be transmitted over a
communications channel of a specified bandwidth in the presence of noise.
The theorem establishes Shannon’s channel capacity for such a communication link: a
bound on the maximum amount of error-free digital data (information) that can be
transmitted with a specified bandwidth in the presence of noise interference, assuming
that the signal power is bounded and that the Gaussian noise process is characterized by
a known power or power spectral density.
C = B log2(1 + S/N)
C: Channel capacity in bits per second
B: Bandwidth of the channel in hertz (passband bandwidth in the case of a modulated signal)
S: Average received signal power over the bandwidth, in watts or volts squared
N: Average noise or interference power over the bandwidth, in watts or volts squared
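As a worked example (our numbers, chosen for familiarity rather than taken from the text above): a voice-grade telephone channel with B = 3000 Hz and a signal-to-noise ratio of 30 dB (S/N = 1000) gives C = 3000 · log2(1 + 1000) ≈ 29.9 kbit/s. A quick check in Python:

from math import log2

def shannon_capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley capacity in bits per second;
    # snr_linear is the power ratio S/N, not decibels.
    return bandwidth_hz * log2(1 + snr_linear)

snr = 10 ** (30 / 10)                 # 30 dB -> S/N = 1000
print(shannon_capacity(3000, snr))    # ~29901.7 bits per second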