Unit 5 - Part-II
Measures of Information
• Information is a measure of uncertainty: the lower the probability of occurrence of a message, the higher its information content.
We know that the entropy of a discrete memoryless source with K symbols is
$$H = \sum_{k=0}^{K-1} p_k \log_2\!\left(\frac{1}{p_k}\right) \ \text{bits/symbol}$$
Rate of information
If the source emits r symbols per second, the information rate is
$$R = rH \ \text{bits/sec}$$
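As a quick check of these two formulas, here is a minimal Python sketch; the symbol probabilities and the symbol rate r below are illustrative values, not taken from the notes.

```python
import math

def entropy(probs):
    """H = sum over k of p_k * log2(1/p_k), in bits per symbol."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Illustrative source: four symbols with assumed probabilities
probs = [0.5, 0.25, 0.125, 0.125]
H = entropy(probs)          # 1.75 bits/symbol
r = 1000                    # assumed symbol rate, symbols/sec
R = r * H                   # information rate, bits/sec
print(f"H = {H:.3f} bits/symbol, R = {R:.1f} bits/sec")
```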
Source Coding
• Source coding (lossless data compression) means removing redundant information from the message signal prior to transmission.
• Basically, this is achieved by assigning short code words to the most frequent source symbols and longer code words to the rarer ones.
• The common source-coding schemes are
Huffman coding
Shannon-Fano coding
Lempel-Ziv coding
Prefix coding
What is the need for source coding?
The average code word length $\bar{L}$ of any distortionless source encoding scheme is bounded as
$$\bar{L} \ge H$$
According to the source coding theorem, the entropy H represents a fundamental limit on the average number of bits per source symbol: $\bar{L}$ can be made arbitrarily close to, but no smaller than, the entropy. Hence we can rewrite the efficiency of a source coder in terms of the entropy as
$$\eta = \frac{H}{\bar{L}}$$
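A small sketch of this efficiency calculation in Python, using an assumed toy source and prefix code for illustration:

```python
import math

probs   = [0.5, 0.25, 0.125, 0.125]      # assumed toy source
lengths = [1, 2, 3, 3]                   # prefix code 0, 10, 110, 111
H     = sum(p * math.log2(1 / p) for p in probs)     # entropy = 1.75 bits
L_bar = sum(p * l for p, l in zip(probs, lengths))   # average length = 1.75 bits
eta   = H / L_bar                                    # efficiency = 1.0 (code meets the bound)
print(f"H = {H}, L-bar = {L_bar}, efficiency = {eta:.3f}")
```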
Huffman Coding
With the Huffman code in the binary case, the two least probable source symbols are combined, resulting in a new message alphabet with one less symbol; the reduction is repeated until only two symbols remain.
ADVANTAGES:
• Uniquely decodable code
• Smallest average code word length
DISADVANTAGES:
• Large code tables add complexity
• Sensitive to channel errors
Procedure
• Create a list of the symbols in decreasing order of probability.
• The two symbols with the lowest probability are assigned a ‘0’ and a ‘1’ and are combined into a single symbol whose probability is their sum.
• The list is re-sorted and the step is repeated until only two symbols remain; each symbol’s code word is then read by tracing the assigned bits back from the last stage to the first (a code sketch follows the worked example below).
Example: Apply the Huffman coding procedure to the following source.

Source symbol sk   Probability pk
s0                 0.1
s1                 0.2
s2                 0.4
s3                 0.2
s4                 0.1
Solution
At each stage the two lowest probabilities are combined into one symbol, which is placed as high as possible in the re-ordered list; the last two entries of each stage are assigned a ‘0’ and a ‘1’, and each symbol’s code word is read by tracing these bits back from the last stage.

Source symbol sk   Stage I   Stage II   Stage III   Stage IV   Code
s2                 0.4       0.4        0.4         0.6        00
s1                 0.2       0.2        0.4         0.4        10
s3                 0.2       0.2        0.2                    11
s0                 0.1       0.2                               010
s4                 0.1                                         011
The average code word length is
$$\bar{L} = \sum_k p_k l_k = 0.4(2) + 0.2(2) + 0.2(2) + 0.1(3) + 0.1(3) = 2.2 \ \text{bits/symbol}$$
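The procedure above can be expressed compactly in Python. The sketch below is one possible implementation, not the notes’ own; depending on how ties between equal probabilities are broken, the individual code words may differ from the table by relabeling, but the average length is still 2.2 bits/symbol.

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a binary Huffman code; probs maps symbol -> probability.
    Repeatedly merges the two least probable groups, prefixing '0'/'1'
    onto the code words of each group."""
    tie = count()  # tie-breaker so heapq never has to compare dicts
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # least probable group
        p1, _, c1 = heapq.heappop(heap)   # next least probable group
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

probs = {"s0": 0.1, "s1": 0.2, "s2": 0.4, "s3": 0.2, "s4": 0.1}
code = huffman_code(probs)
L_bar = sum(probs[s] * len(w) for s, w in code.items())
print(code, f"L-bar = {L_bar:.1f} bits/symbol")   # L-bar = 2.2, as above
```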
Exercise: Construct a Huffman code for the following source.

Symbol   Probability
h        0.1
e        0.1
l        0.4
o        0.25
w        0.05
r        0.05
d        0.05
Shannon-Fano Coding
• The symbols are arranged in order of decreasing probability and divided into two sets whose total probabilities are as nearly equal as possible; the symbols in the first set are assigned a ‘0’ and those in the second set a ‘1’.
• As long as any sets with more than one member remain, the same process is repeated on those sets to determine successive digits of their codes. When a set has been reduced to one symbol, this means the symbol’s code is complete and will not form the prefix of any other symbol’s code.
Shannon-Fano Coding – Example
A discrete memoryless source has six source symbols S0, S1, S2, S3, S4
and S5 with their probability assignments P(S0) = 0.30, P(S1) = 0.25,
P(S2) = 0.20, P(S3) = 0.12, P(S4) = 0.08 and P(S5) = 0.05. Encode the
source with Shannon-Fano codes. Calculate its efficiency.
Solution
Arrange the symbols in decreasing order of probability and split repeatedly into sets of nearly equal total probability:

Symbol   Probability   Code
S0       0.30          00
S1       0.25          01
S2       0.20          10
S3       0.12          110
S4       0.08          1110
S5       0.05          1111

The average code word length is
$$\bar{L} = 0.30(2) + 0.25(2) + 0.20(2) + 0.12(3) + 0.08(4) + 0.05(4) = 2.38 \ \text{bits/symbol}$$
The source entropy is H ≈ 2.36 bits/symbol, so the coding efficiency is
$$\eta = \frac{H}{\bar{L}} \approx \frac{2.36}{2.38} \approx 0.992$$
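A possible Python sketch of the Shannon-Fano procedure described earlier. The splitting rule (choose the most balanced split of the remaining probabilities) is one common convention; it reproduces the codes in the table above.

```python
def shannon_fano(items):
    """items: list of (symbol, probability), sorted by decreasing probability.
    Splits the list into two sets of nearly equal total probability,
    assigns '0' to the first set and '1' to the second, and recurses."""
    if len(items) == 1:
        return {items[0][0]: ""}
    total, run = sum(p for _, p in items), 0.0
    best_i, best_diff = 1, float("inf")
    for i in range(1, len(items)):        # find the most balanced split point
        run += items[i - 1][1]
        diff = abs(run - (total - run))
        if diff < best_diff:
            best_diff, best_i = diff, i
    code = {s: "0" + w for s, w in shannon_fano(items[:best_i]).items()}
    code.update({s: "1" + w for s, w in shannon_fano(items[best_i:]).items()})
    return code

items = [("S0", 0.30), ("S1", 0.25), ("S2", 0.20),
         ("S3", 0.12), ("S4", 0.08), ("S5", 0.05)]
print(shannon_fano(items))  # S0=00, S1=01, S2=10, S3=110, S4=1110, S5=1111
```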
Channel Coding
Why?
To increase the resistance of digital communication systems to
channel noise via error control coding
How?
By mapping the incoming data sequence into a channel input
sequence and inverse mapping the channel output sequence into an
output data sequence in such a way that the overall effect of channel
noise on the system is minimized
The implication of Error Control Coding
Linear Block Codes
An (n,k) block code is one in which each code word contains n bits, of which k bits carry the original binary message.
A code is said to be linear if any two code words in the code can be
added in modulo-2 arithmetic to produce a third code word in the code
Linear Block Codes
In a systematic code, the message bits are transmitted in an unaltered form.
Example: Consider an (n,k) linear block code.
There are 2^k distinct message blocks, each mapped to a distinct code word chosen from the 2^n possible n-bit sequences.
Applying a block of k message bits to a linear block encoder adds (n-k) parity bits to the binary message.
Using vector representation, the message, parity, and code word can be written in row-vector notation respectively as
$$\mathbf{m} = [m_0, m_1, \ldots, m_{k-1}], \quad \mathbf{b} = [b_0, b_1, \ldots, b_{n-k-1}], \quad \mathbf{c} = [c_0, c_1, \ldots, c_{n-1}]$$
Linear Block Codes
With a systematic structure, a code word is divided into two parts: one occupied by the message bits alone, the other by the redundant (parity) bits.
The (n-k) left-most bits of a code word are identical to the corresponding parity bits.
The k right-most bits of a code word are identical to the corresponding message bits.
Linear Block Codes
In matrix form, we can write the code vector c as a partitioned row vector in terms of the vectors m and b:
$$\mathbf{c} = [\mathbf{b} \ \ \mathbf{m}]$$
Given a message vector m, the corresponding code vector c for a systematic linear (n,k) block code can be obtained by a matrix multiplication:
$$\mathbf{c} = \mathbf{m}\mathbf{G}$$
where G is the k-by-n generator matrix.
Linear Block Codes
$$\mathbf{G} = [\mathbf{P} \ \ \mathbf{I}_k]$$
The identity matrix I_k simply reproduces the message vector in the last k elements of c.
The coefficient matrix P generates the parity vector b via
$$\mathbf{b} = \mathbf{m}\mathbf{P}$$
The elements of P are chosen through code design so that the resulting code has good error-control capability.
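A minimal Python sketch of systematic encoding with c = mG. The matrix P below is one standard choice for the (7,4) Hamming code (the choice consistent with the cyclic-code example later in these notes); other equivalent choices exist.

```python
import numpy as np

# (7,4) systematic code in the convention above: G = [P  I4], so c = [b  m].
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([P, np.eye(4, dtype=int)])

def encode(m):
    """Systematic encoding c = m.G; all arithmetic is modulo 2."""
    return np.mod(np.array(m) @ G, 2)

m = [1, 0, 0, 1]     # example message
c = encode(m)
print(c)             # first 3 bits = parity b = m.P = 011, last 4 bits = m
```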
Cyclic Codes
A cyclic code is a linear block code with the additional property that any cyclic shift of a code word is also a code word.
Each code word (c0, c1, ..., cn-1) is represented by a code polynomial
$$c(X) = c_0 + c_1 X + c_2 X^2 + \cdots + c_{n-1} X^{n-1}$$
The code is generated by a generator polynomial g(X) of degree (n-k) that is a factor of X^n + 1.
Cyclic Codes – Encoding Procedure
1. Multiply the message polynomial m(X) by X^(n-k).
2. Divide X^(n-k) m(X) by g(X) to obtain the remainder b(X).
3. Add b(X) to X^(n-k) m(X) to form the code polynomial
$$c(X) = b(X) + X^{n-k} m(X)$$
Cyclic Codes - Example
The (7,4) Hamming code is defined by a generator polynomial g(X) that is a factor of
$$X^7 + 1 = (1 + X)(1 + X + X^3)(1 + X^2 + X^3)$$
Take g(X) = 1 + X + X^3. For the message 1001, m(X) = 1 + X^3 and X^3 m(X) = X^3 + X^6. Dividing X^3 m(X) by g(X) gives the remainder b(X) = X + X^2, so the code polynomial is
$$c(X) = X + X^2 + X^3 + X^6$$
corresponding to the code word 0111001.
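The encoding procedure (multiply, divide, append the remainder) can be sketched in Python with GF(2) polynomial division; this is an illustrative implementation, with polynomials stored as coefficient lists (index i holds the coefficient of X^i).

```python
def poly_rem2(dividend, divisor):
    """Remainder of GF(2) polynomial division on coefficient lists."""
    rem = dividend[:]
    for i in range(len(rem) - 1, len(divisor) - 2, -1):
        if rem[i]:                       # XOR in the shifted divisor
            for j, d in enumerate(divisor):
                rem[i - len(divisor) + 1 + j] ^= d
    return rem[:len(divisor) - 1]

g = [1, 1, 0, 1]            # g(X) = 1 + X + X^3
m = [1, 0, 0, 1]            # m(X) = 1 + X^3  (message 1001)
shifted = [0] * 3 + m       # X^3 * m(X)
b = poly_rem2(shifted, g)   # parity polynomial b(X)
c = b + m                   # c(X) = b(X) + X^3 m(X)  ->  code word [b | m]
print(b, c)                 # b = [0, 1, 1] -> 011; c = 0111001
```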
Cyclic Codes – Implementation
[Figure: encoder built from an (n-k)-stage shift register r0, r1, ..., rn-k-1 with feedback connections determined by g(X)]
Encoding starts with the feedback switch closed, the output switch in the message-bit position, and the register initialized to the all-zero state.
The k message bits are shifted into the register and delivered to the transmitter.
After k shift cycles, the register contains the (n-k) check bits b.
The feedback switch is now opened and the output switch is moved to the check-bit position to deliver the check bits to the transmitter.
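A Python simulation of this shift-register encoder, assuming g(X) = 1 + X + X^3 and the feedback convention just described; the printed register contents can be compared against the exercise table below.

```python
def lfsr_encode(msg_bits, g=(1, 1, 0, 1)):
    """Simulate the (n-k)-stage shift-register encoder for generator
    polynomial g (here g(X) = 1 + X + X^3, the (7,4) Hamming code).
    Message bits are shifted in highest order first; after k shifts
    the register holds the check bits b0 .. b_{n-k-1}."""
    n_k = len(g) - 1
    reg = [0] * n_k                       # stages r0 .. r_{n-k-1}, all zero
    for bit in reversed(msg_bits):        # highest-order message bit first
        feedback = bit ^ reg[-1]          # input XOR last stage
        for i in range(n_k - 1, 0, -1):   # shift, XORing in the feedback
            reg[i] = reg[i - 1] ^ (feedback & g[i])
        reg[0] = feedback & g[0]
        print("register (r2 r1 r0):", list(reversed(reg)))
    return reg

print("check bits:", lfsr_encode([1, 0, 0, 1]))   # message 1001 -> [0, 1, 1]
```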
Cyclic Codes – Implementation Example
The shift-register encoder for the (7,4) Hamming code has 7 - 4 = 3 stages.
When the input message is 0011, the check bits (010) are delivered after 4 shift cycles.
Cyclic Codes – Implementation Exercise
The shift-register encoder for the (7,4) Hamming code has 7 - 4 = 3 stages.
When the input message is 1001, the check bits are delivered after 4 shift cycles:

Shift   Input bit   Register contents (r2 r1 r0)
-       -           0 0 0 (initial state)
1       1           0 1 1
2       0           1 1 0
3       0           1 1 1
4       1           1 1 0

The check bits b0 b1 b2 are 011.