Assignment - Digital Communication
e.g., consider two letters s1 and s2 with probabilities 0.9 and 0.1, respectively. Their self-information and codewords are shown below:

Symbol   Probability   Self-information          Codeword
s1       0.9           -log2 0.9 ≈ 0.152 bits    0
s2       0.1           -log2 0.1 ≈ 3.322 bits    1
Let us calculate the efficiency of symbol-by-symbol encoding and of encoding blocks of symbols.

For symbol-by-symbol encoding:
H(X) = −∑ P(si) log2 P(si) = −0.9 log2 0.9 − 0.1 log2 0.1 ≈ 0.469 bits/symbol
R = ∑ P(si) ni = 0.9×1 + 0.1×1 = 1 bit/symbol
η = H(X)/R × 100% ≈ 46.9%
For encoding blocks of two symbols, the block probabilities and codewords are listed in the table below:

Block   Probability   Codeword
s1s1    0.81          0
s1s2    0.09          10
s2s1    0.09          110
s2s2    0.01          111

R = ∑ P(block) nblock = 0.81×1 + 0.09×2 + 0.09×3 + 0.01×3
  = 1.29 bits per block of two symbols = 0.645 bits/symbol
η = H(X)/R × 100% = 0.469/0.645 × 100% ≈ 72.7%
Therefore, we can say that encoding blocks of symbols is more efficient than symbol-by-symbol encoding.
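As a rough numerical check, the Python sketch below (written for this write-up, not part of the assignment material) recomputes both efficiencies; the symbol names and the block codeword lengths 1, 2, 3, 3 are the ones used in the tables above.

```python
from math import log2
from itertools import product

# Source symbols and probabilities from the example above.
p = {"s1": 0.9, "s2": 0.1}

def entropy(probs):
    """Source entropy in bits/symbol: H = -sum P*log2(P)."""
    return -sum(q * log2(q) for q in probs.values())

H = entropy(p)                                   # ~0.469 bits/symbol

# Symbol-by-symbol code: one bit per symbol (codewords 0 and 1).
R_symbol = 1.0
print(f"symbol-by-symbol: eta = {H / R_symbol:.1%}")       # ~46.9%

# Block-of-two code with the lengths from the table: 0.81->1, 0.09->2, 0.09->3, 0.01->3 bits.
block_p   = {a + b: p[a] * p[b] for a, b in product(p, repeat=2)}
block_len = {"s1s1": 1, "s1s2": 2, "s2s1": 3, "s2s2": 3}
R_block = sum(block_p[blk] * n for blk, n in block_len.items()) / 2   # bits per source symbol
print(f"block of two symbols: eta = {H / R_block:.1%}")    # ~72.7%
```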
1.2. From the output side, the generated codeword (i.e., group of bits that are assigned to a given
symbol) can be of fixed length (in terms of the number of bits per codeword) or variable length.
Which of these two options do you think is efficient? Explain by taking a simple example.
Answer: -
Consider four symbols with probabilities 1/2, 1/4, 1/8 and 1/8.

For fixed-length encoding (2 bits per codeword, e.g. 00, 01, 10, 11):
R = ∑ P(xi) ni = 2 × (1/2 + 1/4 + 1/8 + 1/8) = 2 bits/symbol
H(X) = −∑ P(xi) log2 P(xi) = 1.75 bits/symbol
η = H(X)/R × 100% = 1.75/2 × 100% = 87.5%
For variable-length encoding (codeword lengths 1, 2, 3 and 4, e.g. 0, 10, 110, 1110):
R = ∑ P(xi) ni = 1/2×1 + 1/4×2 + 1/8×3 + 1/8×4 = 1.875 bits/symbol
H(X) = −∑ P(xi) log2 P(xi) = 1.75 bits/symbol
η = H(X)/R × 100% = 1.75/1.875 × 100% ≈ 93.3%
Therefore, we can conclude that variable length encoding is more efficient than fixed
length encoding.
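The comparison can be verified with the short sketch below (written for this write-up); the variable-length code lengths 1, 2, 3, 4 are the assumption used in the calculation above.

```python
from math import log2

p = [1/2, 1/4, 1/8, 1/8]                    # symbol probabilities from the example
H = -sum(q * log2(q) for q in p)            # 1.75 bits/symbol

# Fixed-length code: every codeword uses 2 bits (00, 01, 10, 11).
R_fixed = sum(q * 2 for q in p)             # 2 bits/symbol
# Variable-length prefix code with lengths 1, 2, 3, 4 (e.g. 0, 10, 110, 1110).
R_var = sum(q * n for q, n in zip(p, [1, 2, 3, 4]))   # 1.875 bits/symbol

print(f"fixed-length   : R = {R_fixed:.3f}, eta = {H / R_fixed:.1%}")   # 87.5%
print(f"variable-length: R = {R_var:.3f}, eta = {H / R_var:.1%}")       # 93.3%
```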
1.3. If we combine the options in 1.1 and 1.2, we have four ways of implementing source coding.
Mention these four ways and which way do you think is more efficient? Explain by using a
simple example.
Answer: -
Symbol-by-symbol coding with fixed-length codewords
Symbol-by-symbol coding with variable-length codewords
Block-of-symbols coding with fixed-length codewords
Block-of-symbols coding with variable-length codewords
Of these, block-of-symbols coding with variable-length codewords is the most efficient.
e.g., let us say we have symbols x1, x2 and x3 with probabilities 0.45, 0.35 and 0.2, respectively, and apply a variable-length (Huffman) code to blocks of two symbols.
Note: - I take this example directly from the slide.
R = ∑ P(X) nk = 3.0675 bits per block of two symbols
H(X) = ∑ P(X) I(X) = 3.036 bits per block of two symbols
η = H(X)/R × 100% = 3.036/3.0675 × 100% ≈ 99%
If we continue the process by encoding three or four symbols at a time, the efficiency approaches 100%.
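This trend can be checked numerically. The sketch below (a minimal illustration written for this answer, not the slide's derivation) builds a binary Huffman code over blocks of one, two and three symbols for the probabilities 0.45, 0.35 and 0.2; because it uses the exact entropy (≈1.513 bits/symbol) rather than the slide's rounded values, the efficiencies come out slightly below the 99% quoted above, but they rise toward 100% as the block size grows.

```python
import heapq
from math import log2, prod
from itertools import product

p = [0.45, 0.35, 0.2]                        # P(x1), P(x2), P(x3) from the slide example
H = -sum(q * log2(q) for q in p)             # exact entropy, ~1.513 bits/symbol

def huffman_avg_len(probs):
    """Average codeword length of a binary Huffman code.

    Each merge of the two least probable nodes adds their combined weight
    to the average length, so the sum over all merges is the answer.
    """
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total

for n in (1, 2, 3):                          # block size in source symbols
    blocks = [prod(combo) for combo in product(p, repeat=n)]
    R = huffman_avg_len(blocks) / n          # average bits per source symbol
    print(f"block size {n}: R = {R:.4f} bits/symbol, eta = {H / R:.2%}")
```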
Answer: - η = (entropy / average code length) × 100% = H(X)/R × 100%
2.1. Using your own terms, explain what this efficiency measures.
Answer: - It measures how close the average codeword length R is to the theoretical minimum given by the source entropy H(X), i.e., how little redundancy (wasted bits per symbol) the code carries.
2.2. Should this efficiency be greater than one? Or it is less or equal to one? Both? Explain the
implication of having efficiency in the three indicated ranges.
Answer: - Efficiency should be less than or equal to one. Efficiency equal to one means the average codeword length equals the source entropy, so the code has no redundancy and is optimal. If it is less than one, the code spends more bits per symbol than the entropy requires; it is still decodable without error, but it carries some redundancy. Efficiency greater than one is not possible for a uniquely decodable code, because the average codeword length cannot be smaller than the entropy.
Answer: -
Demarcation or synchronization is the process by which the receiver locates codeword boundaries so that the data can be recovered correctly. In a variable-length code, one problem is that a burst or random error can cause a decoding failure, after which all the subsequent data may be incorrectly decoded. Therefore a method of demarcation or synchronization is needed to ensure that subsequent data is decoded correctly after data has been corrupted by errors; once resynchronization occurs, the data can again be decoded correctly.
This can be achieved by inserting a number of distinct synchronization words, each consisting of a synchronizing sequence and an explicit or implicit cyclic count, into the data at intervals.
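To illustrate why synchronization matters, the sketch below uses an arbitrary prefix code (chosen for illustration, not a code from the assignment) and decodes the same bit stream with and without a single flipped bit; the symbols following the error come out wrong until the decoder happens to realign.

```python
# Arbitrary prefix (variable-length) code chosen for illustration.
code = {"a": "0", "b": "10", "c": "110", "d": "111"}

def encode(symbols):
    return "".join(code[s] for s in symbols)

def decode(bits):
    """Greedy prefix decoding: emit a symbol as soon as a codeword matches."""
    inv, out, buf = {v: k for k, v in code.items()}, [], ""
    for bit in bits:
        buf += bit
        if buf in inv:
            out.append(inv[buf])
            buf = ""
    return "".join(out)

msg = "abacdbca"
tx = encode(msg)
rx = tx[:3] + ("1" if tx[3] == "0" else "0") + tx[4:]   # flip one bit in the channel

print("sent     :", msg)
print("no error :", decode(tx))
print("1 bit err:", decode(rx))   # symbols after the flip are garbled until realignment
```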
4.1. Using Huffman algorithm, generate the codewords corresponding to the symbols.
Answer: -
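As the symbol probabilities for question 4 are not listed here, the sketch below generates Huffman codewords for a hypothetical seven-symbol source with dyadic probabilities 1/4, 1/4, 1/8, 1/8, 1/8, 1/16 and 1/16; these values are assumed for illustration only, but they are consistent with the 100% and 87.5% efficiencies quoted in 4.5.

```python
import heapq
from math import log2

# Hypothetical probabilities (assumed for illustration; the assignment's actual
# values are not reproduced above). They are dyadic, so Huffman reaches eta = 100%.
probs = {"a1": 1/4, "a2": 1/4, "a3": 1/8, "a4": 1/8, "a5": 1/8, "a6": 1/16, "a7": 1/16}

def huffman(probs):
    """Return a dict symbol -> binary codeword, built by repeatedly merging
    the two least probable subtrees (the integer is only a tie-breaker)."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, counter, merged))
        counter += 1
    return heap[0][2]

code = huffman(probs)
R = sum(probs[s] * len(w) for s, w in code.items())
H = -sum(p * log2(p) for p in probs.values())
for s in sorted(code):
    print(s, code[s])
print(f"R = {R} bits/symbol, H = {H} bits/symbol, eta = {H / R:.1%}")
```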
4.4. If fixed-length encoding is used instead, compute the average codeword length and
efficiency of this fixed-length coding?
Answer: - The average codeword length of the fixed-length code will be
R = log2 7 ≈ 2.8 bits/symbol.
Since codeword lengths must be whole numbers of bits, we round up to R = 3 bits/symbol. The efficiency is then η = H(X)/R × 100% = 2.625/3 × 100% = 87.5%.
4.5. Compare the efficiencies of Huffman code and Fixed-length encoding. What do you
conclude from the results?
Answer: - The efficiency of the Huffman code is 100%, while the fixed-length encoding has an efficiency of 87.5%. Therefore, Huffman coding is more efficient than fixed-length encoding.
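Using the same hypothetical dyadic probabilities assumed in the sketch under 4.1, the few lines below reproduce the fixed-length figures of 4.4 and the comparison in 4.5.

```python
from math import log2, ceil

# Hypothetical probabilities, consistent with the 87.5% figure above (assumption).
probs = [1/4, 1/4, 1/8, 1/8, 1/8, 1/16, 1/16]

H = -sum(p * log2(p) for p in probs)                  # 2.625 bits/symbol
R_fixed = ceil(log2(len(probs)))                      # ceil(log2 7) = 3 bits/symbol
R_huffman = sum(p * ceil(-log2(p)) for p in probs)    # dyadic pmf: Huffman length = -log2 p

print(f"fixed-length: R = {R_fixed}, eta = {H / R_fixed:.1%}")       # 87.5%
print(f"Huffman     : R = {R_huffman}, eta = {H / R_huffman:.1%}")   # 100.0%
```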