
SVCE TIRUPATI

COURSE MATERIAL

SUBJECT: ANALOG COMMUNICATIONS (19A04403T)

UNIT: 5

COURSE: B.TECH

DEPARTMENT: ELECTRONICS AND COMMUNICATION ENGINEERING

SEMESTER: II-II

PREPARED BY (Faculty Name/s): B MADHAN MOHAN, Assistant Professor

VERSION: V-1

PREPARED / REVISED DATE: 18-03-2021

TABLE OF CONTENTS – UNIT 5
S. NO   CONTENTS
1   COURSE OBJECTIVES
2   PREREQUISITES
3   SYLLABUS
4   COURSE OUTCOMES
5   CO - PO/PSO MAPPING
6   LESSON PLAN
7   ACTIVITY BASED LEARNING
8   LECTURE NOTES
    5.1  INTRODUCTION
    5.2  INFORMATION
    5.3  INFORMATION AND UNCERTAINTY
    5.4  ENTROPY
    5.5  INFORMATION RATE
    5.6  JOINT AND CONDITIONAL ENTROPY
    5.7  MUTUAL INFORMATION
    5.8  DISCRETE MEMORYLESS CHANNEL
    5.9  TYPES OF CHANNELS
    5.10 CHANNEL CAPACITY
    5.11 CHANNEL CODING THEOREM
    5.12 CHANNEL CAPACITY THEOREM
    5.13 SOURCE CODING THEOREM
9   PRACTICE QUIZ
10  ASSIGNMENTS
11  PART A QUESTIONS & ANSWERS (2 MARKS QUESTIONS)
12  PART B QUESTIONS
13  SUPPORTIVE ONLINE CERTIFICATION COURSES
14  REAL TIME APPLICATIONS
15  CONTENTS BEYOND THE SYLLABUS
16  PRESCRIBED TEXT BOOKS & REFERENCE BOOKS
17  MINI PROJECT SUGGESTION


1. Course Objectives
The objectives of this course are to:
1. Introduce various modulation and demodulation techniques of analog communication systems.
2. Analyze different parameters of analog communication techniques.
3. Know the noise figure in AM & FM receiver systems.
4. Understand the function of the various stages of AM and FM transmitters and know the characteristics of AM & FM receivers.
5. Understand the concepts of information theory.

2. Prerequisites
Students should have knowledge of:
1. Probability
2. Random processes

3. Syllabus
UNIT V
Information Theory: Introduction, Information and Entropy and its properties, Source coding theorem, Data Compaction – Prefix coding, Huffman coding, Discrete Memoryless channels, Mutual Information and its properties, Channel capacity, Channel coding theorem, Application to binary symmetric channels, Differential entropy and mutual information, Information capacity theorem, Implications of the information capacity theorem, Rate Distortion, Illustrative problems.

4. Course outcomes
1. Understand the concepts of various amplitude, angle and pulse modulation schemes, and the concepts of information theory with random processes. (L1)
2. Apply the concepts to solve problems in analog and pulse modulation schemes. (L2)
3. Analyze analog communication systems in the presence of noise. (L3)
4. Compare and contrast design issues, advantages, disadvantages and limitations of various modulation schemes in analog communication systems. (L4)
5. Solve basic communication problems and calculate the information rate and channel capacity of a discrete communication channel. (L5)


5. Co-PO / PSO Mapping


CO PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
CO1 3 3 2 2

CO2 3 3 2 2

CO3 3 3 2 2

CO4 3 3 2 2

CO5 3 3 2 2

6. Lesson Plan

Week 1
  Lecture 1: Introduction, Information and Entropy and its properties (T1)
  Lecture 2: Source coding theorem (T1, R1)
  Lecture 3: Data Compaction – Prefix coding, Huffman coding (T1, R1)
  Lecture 4: Discrete Memoryless channels, Mutual Information (T1, R1)

Week 2
  Lecture 5: Channel capacity, Channel coding theorem (T1, R2)
  Lecture 6: Application to binary symmetric channels (T1, R1)
  Lecture 7: Differential entropy and mutual information (T1, R1)
  Lecture 8: Information capacity theorem (T1, R1)

Week 3
  Lecture 9: Implications of the information capacity theorem (T1, R1)
  Lecture 10: Rate Distortion, Illustrative problems (T1, R1)

7. Activity Based Learning


1. Fundamentals of Information Theory and Coding
2. Source Encoder, Shannon Hartley Law
8. Lecture Notes
5.1 INTRODUCTION
The purpose of a communication system is to facilitate the transmission of signals generated by an information source to the receiver end over a communication channel.
Information theory is a branch of probability theory which may be applied to the
study of communication systems. Information theory allows us to determine the
information content in a message signal leading to different source coding
techniques for efficient transmission of message.
In the context of communication, information theory deals with mathematical
modeling and analysis of communication rather than with physical sources and
physical channels.
In particular, it provides answers to two fundamental questions,
i. What is the irreducible complexity below which a signal cannot be
compressed?
ii. What is the ultimate transmission rate for reliable communication over a
noisy channel?
The answers to these questions lie in the entropy of the source and the capacity of the channel, respectively.
• Entropy is defined in terms of the probabilistic behavior of a source of information.
• Channel capacity is defined as the intrinsic ability of a channel to convey information; it is naturally related to the noise characteristics of the channel.

5.2 INFORMATION
Information is what the source of a communication system produces, whether the system is analog or digital. Information is related to the probability of occurrence of an event. The unit of information is called the bit.
Based on memory, information sources can be classified as follows:
i. A source with memory is one for which a current symbol depends on the previous symbols.
ii. A memoryless source is one for which each symbol produced is independent of the previous symbols.
iii. A Discrete Memoryless Source (DMS) can be characterized by the list of symbols, the probability assignment to these symbols, and the specification of the rate at which the source generates these symbols.
Consider a probabilistic experiment that involves the observation of the output emitted by a discrete source during every unit of time (signaling interval). The source output is modeled as a discrete random variable S, which takes on symbols from a fixed finite alphabet

S= { S0,S1,S2,… ,SK-1}

With probabilities,

P(S=sk)= pk , k=0,1,2,…….,K-1.

The set of probabilities must satisfy the condition

Σ pk = 1, where the sum runs over k = 0, 1, ..., K-1.

We assume that the symbols emitted by the source during successive signaling
intervals are statistically independent.

A source having the above-described properties is called a Discrete Memoryless Source (DMS).

5.3 Information and Uncertainty


Information is related to the probability of occurrence of an event: the greater the uncertainty of an event, the greater the information associated with it. Consider the event S = sk, describing the emission of symbol sk by the source with probability pk. Clearly, if the probability pk = 1 and pi = 0 for all i ≠ k, then there is no surprise and therefore no information when symbol sk is emitted. If, on the other hand, the source symbols occur with different probabilities and the probability pk is low, then there is more surprise, and therefore more information, when symbol sk is emitted by the source than when another symbol si with higher probability is emitted.
Example:

i. Sun rises in the East: here the uncertainty is zero and there is no surprise in the statement. The probability of occurrence is 1 (pk = 1), so no information is conveyed.

ii. Sun does not rise in the East: here the uncertainty is high, because the statement carries maximum surprise, and hence the maximum amount of information.

The amount of information is related to the inverse of the probability of occurrence of the event S = sk, as shown in Fig. 1.1. The amount of information gained after observing the event S = sk, which occurs with probability pk, is defined by the logarithmic function

I(sk) = log2(1/pk) = -log2(pk)

5.3.1. Properties
I. If we are absolutely certain of the outcome of an event, even before it occurs, no information is gained: I(sk) = 0 for pk = 1.

II. The occurrence of an event S = sk either provides some or no information, but never brings about a loss of information: I(sk) ≥ 0.

III. The less probable an event is, the more information we gain when it occurs: I(sk) > I(si) for pk < pi.

IV. The additive property follows from the logarithmic definition of I(sk): I(sk sl) = I(sk) + I(sl) if sk and sl are statistically independent.


It is standard practice in information theory to use a logarithm to base 2 with
binary signaling in mind. The resulting unit of information is called the bit, which is
a contraction of the words binary digit.

When pk= ½, we have I(sk) = 1 bit.


Hence, one bit is the amount of information that we gain when one of two possible and equally likely (i.e., equiprobable) events occurs.
Note that the information I(sk) is positive, because the logarithm of a number less
than one, such as a probability, is negative.
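As an illustration of this logarithmic measure, the short Python sketch below (added here, with assumed example probabilities) evaluates I(sk) = log2(1/pk) and confirms that pk = 1/2 gives exactly one bit.

```python
import math

def self_information(p):
    """Self-information in bits of an event with probability p (0 < p <= 1)."""
    return -math.log2(p)

# A certain event carries no information; rarer events carry more.
for p in (1.0, 0.5, 0.25, 0.125, 0.01):
    print(f"p = {p:<6} -> I = {self_information(p):.3f} bits")
# p = 0.5 gives exactly 1 bit, matching the definition of the bit above.
```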
5.4. ENTROPY

The entropy of a discrete random variable, representing the output of a source of


information, is a measure of the average information content per source symbol.
The amount of information I(sk) produced by the source during an arbitrary
signaling interval depends on the symbol sk emitted by the source at the time. The
self-information I(sk) is a discrete random variable that takes on the values I(s0),
I(s1), …, I(sK – 1) with probabilities p0, p1, ….., pK – 1 respectively.
The expectation of I(sk) over all the values taken by the random variable S is given by

H(S) = E[I(sk)] = Σ pk I(sk) = Σ pk log2(1/pk), with the sum over k = 0, 1, ..., K-1

The quantity H(S) is called the entropy; it is measured in bits per source symbol.


5.4.1. Properties of Entropy
Consider a discrete memoryless source whose mathematical model is defined by S = {s0, s1, s2, ..., sK-1}, with probabilities P(S = sk) = pk, k = 0, 1, 2, ..., K-1.
The entropy H(S) of the discrete random variable S is bounded as follows:

0 ≤ H(S) ≤ log2 K

Where K is the number of symbols in the alphabet S.
Property 1: H(S) = 0, if, and only if, the probability Pk = 1 for some k, and
the remaining probabilities in the set are all zero; this lower
bound on entropy corresponds to no uncertainty.
Property 2: H(S) = log K, if, and only if, Pk = 1/K for all k (i.e., all the
symbols in the source alphabet are equiprobable); this
upper bound on entropy corresponds to maximum
uncertainty.
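A small Python sketch (an illustration added here, using assumed example probabilities) computes H(S) for a four-symbol alphabet and checks the two bounds stated above.

```python
import math

def entropy(probs):
    """Entropy H(S) in bits/symbol of a discrete memoryless source."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

K = 4
skewed  = [0.5, 0.25, 0.125, 0.125]   # unequal probabilities
uniform = [1.0 / K] * K               # equiprobable symbols
certain = [1.0, 0.0, 0.0, 0.0]        # one symbol is certain

print("H(skewed)  =", entropy(skewed))   # between 0 and log2 K
print("H(uniform) =", entropy(uniform))  # equals log2 K = 2 (Property 2)
print("H(certain) =", entropy(certain))  # equals 0 (Property 1)
```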

5.5. INFORMATION RATE


Information rate (R) is represented in average number of bits of
information per second.
R = r H(S)
Where,
R - Information rate (Information bits / second)
H(S) - the Entropy or average information (bits / symbol) and
r - The rate at which messages are generated (symbols / second).
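For example, with an assumed symbol rate of r = 2000 symbols/second and the example probabilities used below (illustrative values only), the information rate follows directly from R = r·H(S):

```python
import math

def entropy(probs):
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

r = 2000                             # assumed symbol rate in symbols/second
probs = [0.5, 0.25, 0.125, 0.125]    # assumed symbol probabilities
H = entropy(probs)                   # entropy in bits/symbol
R = r * H                            # information rate in bits/second
print(f"H(S) = {H} bits/symbol, R = {R} bits/second")
```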

5.6. JOINT AND CONDITIONAL ENTROPY


Let us define the following entropy functions for a channel with m inputs and n outputs:

H(X) = -Σi P(xi) log2 P(xi)
H(Y) = -Σj P(yj) log2 P(yj)
H(X|Y) = -Σi Σj P(xi, yj) log2 P(xi|yj)
H(Y|X) = -Σi Σj P(xi, yj) log2 P(yj|xi)
H(X,Y) = -Σi Σj P(xi, yj) log2 P(xi, yj)

Where,
H(X) - Average uncertainty of the channel input
H(Y) - Average uncertainty of the channel output
H(X|Y) and H(Y|X) - Conditional entropies
H(X,Y) - Joint entropy
P(xi) - Input probability
P(yj) - Output probability
P(xi|yj), P(yj|xi) - Conditional probabilities and
P(xi, yj) - Joint probability

Relationship between conditional and joint entropy:
1. H(X,Y) = H(X|Y) + H(Y)
2. H(X,Y) = H(Y|X) + H(X)
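These two identities can be verified numerically before the formal proof. The sketch below (an illustration added here, using an assumed 2x2 joint probability matrix) computes all five entropy functions from the joint distribution and checks both relationships.

```python
import math

def log2(x):
    return math.log2(x) if x > 0 else 0.0

# Assumed example joint probability matrix P(x_i, y_j); rows = inputs, columns = outputs.
P = [[0.30, 0.10],
     [0.05, 0.55]]

px = [sum(row) for row in P]                              # marginal P(x_i)
py = [sum(P[i][j] for i in range(2)) for j in range(2)]   # marginal P(y_j)

H_X  = -sum(p * log2(p) for p in px)
H_Y  = -sum(p * log2(p) for p in py)
H_XY = -sum(P[i][j] * log2(P[i][j]) for i in range(2) for j in range(2))
H_X_given_Y = -sum(P[i][j] * log2(P[i][j] / py[j]) for i in range(2) for j in range(2))
H_Y_given_X = -sum(P[i][j] * log2(P[i][j] / px[i]) for i in range(2) for j in range(2))

print("H(X,Y)        =", H_XY)
print("H(X|Y) + H(Y) =", H_X_given_Y + H_Y)   # identity 1
print("H(Y|X) + H(X) =", H_Y_given_X + H_X)   # identity 2
```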
Proof



5.7. MUTUAL INFORMATION


The mutual information I(X;Y) is a measure of the uncertainty about the Channel
input, which is resolved by observing the channel output.
The mutual information I(xi; yj) of a channel is defined as the amount of information transferred when xi is transmitted and yj is received:

I(xi; yj) = log2 [ P(xi | yj) / P(xi) ]

Where,
I(xi; yj) - Mutual information
P(xi | yj) - Conditional probability that xi was transmitted given that yj is received
P(xi) - Probability of symbol xi being selected for transmission
5.7.1. Properties of Mutual Information
A) Symmetry Property
The mutual information of a channel is symmetric in the sense that
I(X;Y) =I(Y;X)

B) Expansion of The Mutual Information Property
The mutual information of a channel is related to the entropies of the channel input and channel output and to their joint entropy by

I(X;Y) = H(X) + H(Y) - H(X,Y)
C) Non Negativity Property


The mutual information is always nonnegative. We cannot lose
information, on the average, by observing the output of a channel .
I(X;Y) ≥ 0
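These properties can be checked with a short sketch (an illustration added here, using an assumed joint distribution): I(X;Y) is computed both as H(X) - H(X|Y) and as H(Y) - H(Y|X), confirming the symmetry property, and it comes out non-negative.

```python
import math

def log2(x):
    return math.log2(x) if x > 0 else 0.0

def entropy(probs):
    return -sum(p * log2(p) for p in probs)

# Assumed example joint distribution P(x_i, y_j).
P = [[0.40, 0.10],
     [0.10, 0.40]]

px = [sum(row) for row in P]
py = [sum(P[i][j] for i in range(2)) for j in range(2)]

H_X = entropy(px)
H_Y = entropy(py)
H_X_given_Y = -sum(P[i][j] * log2(P[i][j] / py[j]) for i in range(2) for j in range(2))
H_Y_given_X = -sum(P[i][j] * log2(P[i][j] / px[i]) for i in range(2) for j in range(2))

I_xy = H_X - H_X_given_Y     # I(X;Y)
I_yx = H_Y - H_Y_given_X     # I(Y;X)
print("I(X;Y) =", I_xy)
print("I(Y;X) =", I_yx)       # equals I(X;Y): symmetry property
print("non-negative:", I_xy >= 0)
```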

5.8. DISCRETE MEMORYLESS CHANNELS


A discrete memoryless channel is a statistical model with an input X and output Y (i.e., a noisy version of X). Both X and Y are random variables.
The channel is said to be discrete when both X and Y have alphabets of finite size. It is said to be memoryless when the current output symbol depends only on the current input symbol and not on previous ones.

Fig 5.1. Discrete memoryless channel


If we consider the channel input alphabet X = {x1, x2, ..., xm} and the channel output alphabet Y = {y1, y2, ..., yn}, the set of transition probabilities p(yj | xi) = P(Y = yj | X = xi) is represented in matrix form as

P = [p(yj | xi)], i = 1, ..., m; j = 1, ..., n

where P is the channel matrix.
Each row of the channel matrix P corresponds to a fixed channel input, whereas each column of the matrix corresponds to a fixed channel output.
A property of the channel matrix is that the entries in each row sum to one: Σj p(yj | xi) = 1 for all i.

Suppose that the channel input X = xi occurs with probability p(xi) = P(X = xi).
The joint probability distribution of the random variables X and Y is then

p(xi, yj) = P(X = xi, Y = yj) = p(yj | xi) p(xi)

The marginal probability distribution of the output random variable Y is obtained by averaging out the dependence of p(xi, yj) on xi as

p(yj) = Σi p(yj | xi) p(xi)

The above equation states that, given the input a priori probabilities p(xi) and the channel matrix p(yj | xi), we may calculate the probabilities of the output symbols p(yj).
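The sketch below (an added illustration with assumed input probabilities and an assumed channel matrix) carries out exactly this calculation: given p(xi) and p(yj | xi) it forms the joint probabilities p(xi, yj) and the output probabilities p(yj).

```python
# Assumed a priori input probabilities p(x_i) and channel matrix p(y_j | x_i).
px = [0.6, 0.4]
P = [[0.9, 0.1],      # row i holds p(y_j | x_i); each row sums to 1
     [0.2, 0.8]]

# Joint probabilities p(x_i, y_j) = p(y_j | x_i) * p(x_i)
joint = [[P[i][j] * px[i] for j in range(len(P[0]))] for i in range(len(px))]

# Output probabilities p(y_j) = sum_i p(y_j | x_i) * p(x_i)
py = [sum(joint[i][j] for i in range(len(px))) for j in range(len(P[0]))]

print("joint p(x_i, y_j):", joint)
print("output p(y_j):   ", py)    # sums to 1
```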
5.9. TYPES OF CHANNELS
LOSSLESS CHANNELS
A channel described by a channel matrix with only one non-zero element in
each column is called a lossless channel. The channel matrix of a lossless channel
will be like,

DETERMINISTIC CHANNEL
A channel described by a channel matrix with one non-zero element in each row is called a deterministic channel.


NOISELESS CHANNEL
A channel is called a noiseless channel if it is both lossless and deterministic. The channel matrix of a noiseless channel is an identity matrix.

BINARY SYMMETRIC CHANNEL (BSC)


Binary symmetric channel has two input symbols x0=0 and x1=1 and two output
symbols y0=0 and y1=1. The channel is symmetric because the probability of
receiving a 1 if a 0 is sent is the same as the probability of receiving a 0 if a 1 is
sent.
In other words, a correct bit is received with probability 1 - p and a wrong bit with probability p; p is called the cross-over probability. The conditional probability of error is denoted by p.

The entropy H(X) is maximized when the channel input probabilities are p(x0) = p(x1) = 1/2, where x0 and x1 are each 0 or 1. The mutual information I(X;Y) is similarly maximized, so that

C = I(X;Y) evaluated at p(x0) = p(x1) = 1/2

From the BSC diagram, the transition probabilities are p(y1|x0) = p(y0|x1) = p and p(y0|x0) = p(y1|x1) = 1 - p.

The capacity of the binary symmetric channel (BSC) is therefore

C = 1 + p log2 p + (1 - p) log2 (1 - p) = 1 - H(p)

where H(p) is the binary entropy function.
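A minimal sketch (added for illustration) evaluating C = 1 - H(p) for a few cross-over probabilities; it shows C = 1 bit/use for a noiseless BSC (p = 0) and C = 0 for p = 1/2.

```python
import math

def binary_entropy(p):
    """Binary entropy function H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with cross-over probability p."""
    return 1.0 - binary_entropy(p)

for p in (0.0, 0.05, 0.1, 0.5):
    print(f"p = {p:<5} -> C = {bsc_capacity(p):.4f} bits/channel use")
```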

BINARY ERASURE CHANNEL (BEC)


A binary erasure channel (BEC) has two inputs (0,1) and three outputs (0,y,1). The
symbol y indicates that, due to noise, no deterministic decision can be made as
to whether the received symbol is a 0 or 1. In other words the symbol y represents
that the output is erased. Hence the name binary erasure channels (BEC).
The mutual information and channel capacity are

I(X;Y) = H(X) - H(X|Y) = H(X) - (1 - p)H(X) = p H(X)

where 1 - p is the probability of erasure; the channel capacity is therefore C = max I(X;Y) = p.
5.10. CHANNEL CAPACITY

The channel capacity of a discrete memoryless channel is defined as the maximum average mutual information I(X;Y) in any single use of the channel (i.e., signaling interval), where the maximization is over all possible input probability distributions {p(xi)} on X. It is measured in bits per channel use.

CHANNEL CAPACITY: C = max I(X;Y), where the maximization is over all input probability distributions {p(xi)}

The channel capacity C is a function only of the transition probabilities p(yj | xi), which define the channel. The calculation of the channel capacity C involves maximization of the average mutual information I(X;Y) over the K variables p(xi), subject to
p(xi) ≥ 0 for all i, and Σ p(xi) = 1.
The transmission (or channel) efficiency is expressed as

η = (actual mutual information) / (maximum mutual information) = I(X;Y) / max I(X;Y) = I(X;Y) / C

The redundancy of the channel is expressed as

R = 1 - η = 1 - I(X;Y) / C
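The definition C = max I(X;Y) can be illustrated numerically. The sketch below (an added illustration, with an assumed binary symmetric channel matrix) sweeps the input probability p(x0) and picks the largest mutual information; for a symmetric channel the maximum occurs at p(x0) = 1/2.

```python
import math

def log2(x):
    return math.log2(x) if x > 0 else 0.0

def mutual_information(px0, P):
    """I(X;Y) in bits for a 2-input channel with input probability px0 and matrix P[i][j] = p(y_j|x_i)."""
    px = [px0, 1.0 - px0]
    n_out = len(P[0])
    py = [sum(px[i] * P[i][j] for i in range(2)) for j in range(n_out)]
    I = 0.0
    for i in range(2):
        for j in range(n_out):
            joint = px[i] * P[i][j]
            if joint > 0:
                I += joint * log2(P[i][j] / py[j])
    return I

# Assumed channel matrix of a BSC with cross-over probability 0.1.
P = [[0.9, 0.1],
     [0.1, 0.9]]

# Brute-force search over the input distribution (a crude stand-in for the maximization).
C, px0_opt = max((mutual_information(k / 1000.0, P), k / 1000.0) for k in range(1, 1000))
print(f"C ~ {C:.4f} bits/use at p(x0) ~ {px0_opt}")   # about 0.531 bits/use at p(x0) = 0.5
```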
5.11. CHANNEL CODING THEOREM
The design goal of channel coding is to increase the resistance of a digital
communication system to channel noise. Specifically, channel coding consists of
mapping the incoming data sequence into a channel input sequence and
inverse mapping the channel output sequence into an output data sequence in
such a way that the overall effect of channel noise on the system is minimized.
Mapping operation is performed in the transmitter by a channel encoder,
whereas the inverse mapping operation is performed in the receiver by a
channel decoder, as shown in the block diagram of the figure. For simplicity, the source encoding (before channel encoding) and source decoding (after channel decoding) processes are not included in this figure.

Fig 5.2. Block diagram of a digital communication system


THEOREM
The channel-coding theorem for a discrete memoryless channel is stated as
follows:
I) Let a discrete memoryless source with an alphabet S have entropy H(S) and produce symbols once every Ts seconds. Let a discrete memoryless channel have capacity C and be used once every Tc seconds. Then, if

H(S)/Ts ≤ C/Tc

there exists a coding scheme for which the source output can be transmitted over the channel and be reconstructed with an arbitrarily small probability of error. The parameter C/Tc is called the critical rate. The system is said to be signalling at the critical rate when

H(S)/Ts = C/Tc
II) Conversely, if

H(S)/Ts > C/Tc

it is not possible to transmit information over the channel and reconstruct it with an arbitrarily small probability of error.
The ratio Tc/Ts equals the code rate of the encoder, denoted by r:

r = Tc/Ts
In short, the Channel Coding Theorem states that if a discrete memoryless


channel has capacity C and a source generates information at a rate less than
C, then there exists a coding technique such that the output of the source may be transmitted over the channel with an arbitrarily low probability of symbol error.

Conversely, it is not possible to find such a code if the code rate ‘r’ is
greater than the channel capacity C. If r ≤ C, there exists a code capable of
achieving an arbitrarily low probability of error.

Limitations
i. It does not show us how to construct a good code.
ii. The theorem does not have a precise result for the probability of symbol
error after decoding the channel output.
MAXIMUM ENTROPY FOR GAUSSIAN CHANNEL
The probability density function of a zero-mean Gaussian source is

p(x) = (1/√(2πσ²)) exp(−x²/(2σ²))

where σ² is the average power (variance) of the source.


The maximum (differential) entropy is

h(X) = (1/2) log2(2πeσ²) bits per sample


From the above results we observe that, for a given variance σ², the entropy is maximum when p(x) is the zero-mean Gaussian probability density function.
5.12. CHANNEL (INFORMATION) CAPACITY THEOREM OR SHANNON’S THEOREM
The information capacity C of a continuous channel of bandwidth B hertz, affected by AWGN of total noise power N within the channel bandwidth, is given by the formula

C = B log2(1 + S/N) bits/second
where, C - Information Capacity.


B - Channel Bandwidth
S - Signal power (or) Average Transmitted power
N – Total noise power within the channel bandwidth (B)
It is easier to increase the information capacity of a continuous
communication channel by expanding its bandwidth than by increasing the
transmitted power for a prescribed noise variance.

PROOF
Let us consider the channel or information capacity C as the difference between the differential entropy of the channel output and the differential entropy of the noise within the channel bandwidth.

If the signal is band-limited to B Hz, it is sampled at the Nyquist rate of 2B samples per second; then

C = 2B × (1/2) log2(1 + S/N) = B log2(1 + S/N) bits/second
If we substitute the noise power N = N0B, where N0 is the noise power spectral density, then the channel or information capacity becomes

C = B log2(1 + S/(N0B)) bits/second
TRADE OFF

I. A noiseless channel has infinite capacity. If there is no noise in the channel, then N = 0, so S/N → ∞ and hence C → ∞.

II. Infinite bandwidth gives only a limited capacity. As B increases, the noise power N = N0B also increases, so the ratio S/N decreases. Hence, even as B approaches infinity, the capacity does not approach infinity; it approaches the finite limit C∞ = (S/N0) log2 e ≈ 1.44 S/N0.
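This bandwidth-power trade-off can be seen numerically. The following sketch (added here, with assumed values of S and N0) evaluates C = B log2(1 + S/(N0 B)) for increasing bandwidths and compares the result with the limiting value (S/N0) log2 e.

```python
import math

S = 1e-3        # assumed signal power in watts
N0 = 1e-9       # assumed noise power spectral density in W/Hz

def capacity(B):
    """Shannon-Hartley capacity in bits/second for bandwidth B (Hz)."""
    return B * math.log2(1.0 + S / (N0 * B))

for B in (1e3, 1e4, 1e5, 1e6, 1e7):
    print(f"B = {B:>10.0f} Hz -> C = {capacity(B):,.0f} bits/s")

# As B -> infinity the capacity saturates at (S/N0) * log2(e), about 1.44 * S/N0.
print("limit C_inf =", (S / N0) * math.log2(math.e), "bits/s")
```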
5.13. SOURCE CODING THEOREM
Efficient representation of data generated by a discrete source is accomplished
by some encoding process. The device that performs the representation is called
a source encoder.
An efficient source encoder must satisfy two functional requirements:
I) The code words produced by the encoder are in binary form.
II) The source code is uniquely decodable, so that the original source sequence can be reconstructed perfectly from the encoded binary sequence.

FIG 5.3. Source encoder


AVERAGE CODE WORD LENGTH

The average code word length is

L = Σ pk lk  (sum over all source symbols sk)

where,
L - Average number of bits per source symbol
pk - Probability of symbol sk
lk - Number of bits (code word length) assigned to symbol sk

Coding Efficiency (η)
The coding efficiency is defined as

η = H(S) / L

where H(S) is the entropy of the source and L is the average code word length. The value of the coding efficiency is always less than or equal to 1. The source encoder is said to be efficient when η approaches unity.
We know that the average information per symbol is the entropy, H(S) = Σ pk log2(1/pk).
5.13.1. CODING TECHNIQUES

The two popular coding techniques used in information theory are (i) Shannon-Fano coding and (ii) Huffman coding. The coding procedures are explained in the following sections.

Shannon-Fano Coding Procedure

1. Sort the list of symbols in decreasing order of probability.

2. Partition the set into two sets that are as close to equiprobable as possible, and assign 0 to the upper set and 1 to the lower set.


3. Repeat step 2 for each part, until every subgroup contains only one symbol.

4. Assign a code to each symbol by concatenating the binary values obtained at each stage.
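A compact recursive sketch of this procedure (an illustration added here; the example symbols and probabilities are assumed values):

```python
def shannon_fano(symbols):
    """symbols: list of (symbol, probability); returns dict symbol -> code string."""
    codes = {s: "" for s, _ in symbols}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        running, k, best_diff = 0.0, 1, float("inf")
        # Find the split point that makes the two parts as close to equiprobable as possible.
        for i in range(1, len(group)):
            running += group[i - 1][1]
            diff = abs(2 * running - total)
            if diff < best_diff:
                best_diff, k = diff, i
        upper, lower = group[:k], group[k:]
        for s, _ in upper:
            codes[s] += "0"     # assign 0 to the upper set
        for s, _ in lower:
            codes[s] += "1"     # assign 1 to the lower set
        split(upper)
        split(lower)

    # Step 1: sort in decreasing order of probability.
    ordered = sorted(symbols, key=lambda sp: sp[1], reverse=True)
    split(ordered)
    return codes

example = [("A", 0.4), ("B", 0.2), ("C", 0.2), ("D", 0.1), ("E", 0.1)]
print(shannon_fano(example))
```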

5.13.2. Huffman Coding – Procedure


1. The messages are arranged in the order of decreasing probability.

2. The two messages of lowest probability are assigned 0 and 1; i.e., combine the probabilities of the two symbols having the lowest probabilities, and reorder the resulting probabilities. This step is called reduction 1.

3. The same procedure is repeated until there are two ordered probabilities
remaining.

4. Start encoding with the last reduction, which consists of exactly two ordered probabilities: (i) assign 0 as the first digit in the code words of all source symbols associated with the first probability; (ii) assign 1 to those associated with the second probability.

5. Keep working backwards in this way through the reductions until the first column is reached, appending a digit at each step.
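A short sketch of the Huffman procedure using a priority queue (added for illustration; the message probabilities below are the example values from Part B, question 2):

```python
import heapq
import itertools

def huffman(symbols):
    """symbols: list of (symbol, probability); returns dict symbol -> binary code."""
    counter = itertools.count()            # tie-breaker so heapq never compares lists
    heap = [[p, next(counter), [s]] for s, p in symbols]
    codes = {s: "" for s, _ in symbols}
    heapq.heapify(heap)
    if len(heap) == 1:                     # degenerate single-symbol source
        return {symbols[0][0]: "0"}
    while len(heap) > 1:
        # Combine the two entries of lowest probability ...
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        # ... prefixing 0 to one group and 1 to the other (codes grow from the last reduction backwards).
        for s in lo[2]:
            codes[s] = "0" + codes[s]
        for s in hi[2]:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, [lo[0] + hi[0], next(counter), lo[2] + hi[2]])
    return codes

example = [("x1", 0.45), ("x2", 0.35), ("x3", 0.20)]
print(huffman(example))   # e.g. {'x1': '0', 'x2': '11', 'x3': '10'} or an equivalent code
```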

9. Practice Quiz
1. The capacity of Gaussian channel is?
a) C = 2B log2 (1+S/N) bits/s
b) C = 2B log2 (1-S/N) bits/s
c)C = B log2 (1+S/N) bits/s
d) C = 2B log2 (1-S/2N) bits/s
2. For M equally likely messages, the average amount of information H is?
a)H = log10M
b) H = log2M
c)H = log10M2
d)H = 2log10M
3. The capacity of a binary symmetric channel, given H(P) is binary entropy function is?
a)1 - H(P)
b) H(P) - 1
c)1 - H(P)2
d) H(P)2 - 1
4. For M equally likely messages, M>>1, if the rate of information R ≤ C, the Probability of
error is?
a) Small
b)Close to unity
c) Not predictable
d) unknown
5. According to the Shannon-Hartley theorem:
a)The channel capacity becomes infinite with infinite bandwidth
b)The channel capacity does not become infinite with infinite bandwidth
c)Has a tradeoff between bandwidth and Signal to noise ratio
d)Both b and c are correct
6. Which type of cutting tools have wide application on lathes?
a) single point
b) multi point
c) both single point and multi point
d) none
7. Which of the following is the example of multi point cutting tool?
a) milling cutter
b) broaching tool
c) both milling utter and broaching tool
d) none

8. In how many groups, cutting tools can be divided?


a) 2
b) 3
c) 4
d) 6

9. Which of the following is an example of non-cutting shaping process?
a) Turning
b) Forging
c) Drilling
d) Milling
10. Which of the following is the example of cutting shaping process?
a) Knurling
b) Forging
c) Pressing
d) Drawing
11. The process of metal cutting is affected by the relative motion between the piece of
work and the hard type edge of a cutting tool against the work piece.
a) True
b) False
12. In how many groups, various metal working processes can be classified?
a) 2
b) 3
c) 4
d) 8
10. Assignments

S.No Question BL CO
1 Give the significance of Shannon’s channel capacity theorem. 2 1

Consider the BSC with P(x1)= as shown in fig. Show that mutual
information is I(X,Y)=H(Y)+p log p+(1-p)log(1-p)
(1-p)
2 2 1
p
p

(1-p)

3 Derive the expression for condition of maximum entropy. 2 1


Write a short note on channel capacity of a Discrete memory less
4 2 1
channel.
A source puts out one of five possible messages during each
5 message interval. The probabilities of these messages are 3 1
1/2, 1/4, 1/8 and 1/8. Find the information content of each message.

11. Part A- Question & Answers

S.No Question& Answers BL CO


1 What is entropy?
Ans. Entropy is also called average information per message. It 1 1
is the ratio of total Information to number of messages.
i.e., Entropy, H =(Total information)/ Number of messages.
2 What is channel redundancy?
Ans. Redundancy (γ) = 1 – code efficiency 1 1
Redundancy should be as low as possible.
3 Name the two source coding techniques?
Ans. Source coding techniques include: a) Prefix coding, 1 1
b) Shannon-Fano coding, c) Huffman coding
4 State any four properties of entropy?
Ans. i) For a sure event or an impossible event, the entropy is zero.
ii) For M equally likely symbols, the entropy is log2 M.
1 1
iii) The upper bound on entropy is Hmax = log2 M.
iv) Entropy is a lower bound on the average number of bits per
symbol.
5 State the channel coding theorem for a discrete memory less
channel?
Ans. Statement of the theorem:
Given a source of ‘M’ equally likely messages, with M >>1,
which is generating information at a rate.
1 1
Given channel with capacity C. Then if, R ≤ C There exists
a coding technique such that the output of the source
may be transmitted over the channel with a probability of
error in the received message which may be made
arbitrarily small.
6 What is memory less source? Give an example?
Ans. The symbols emitted by a memoryless source do not
depend upon previous symbols. Every symbol is 1 1
independent. For example, a character generated by a
keyboard represents a memoryless source.
7 Explain the significance of the entropy H(X/Y) of a
communication system where X is the transmitter and Y is the
receiver?
Ans. a)H(X/Y) is called conditional entropy. It represents
uncertainty of X, on average, when Y is known. 2 1
b)In other words H(X/Y) is an average measure of
uncertainty in X after Y is received.
c) H(X/Y)Represents the information lost in the noisy
channel
8 What is prefix code?
Ans. In prefix code, no codeword is the prefix of any other
codeword. It is variable length code. The binary digits 2 1
(code words) are assigned to the messages as per their
probabilities of occurrence.
9 What is information theory?
Ans. Information theory deals with the mathematical modelling
2 1
and analysis of a communication system rather than with
physical sources and physical channels.
10 Define bandwidth efficiency?
2 1
Ans. The ratio of channel capacity to bandwidth is called

bandwidth efficiency.
i.e, Bandwidth efficiency= Channel capacity (C)/Bandwidth(B).

12. Part B- Questions

S.No Question BL CO
1 Derive Shannon’s channel capacity theorem 1 1
2 A zero memory source emits messages x1, x2, x3 with 2 1
probabilities 0.45,0.35 and 0.20 respectively. Find the optimum
(Huffman) binary code for this source as well as for its 2nd order
extension (i.e. N = 2). Determine the code efficiency in each case.
3 Write short note on measure of information and entropy 2 1
4 A discrete memoryless source has an alphabet of seven symbols 3 1
with probabilities 0.25,0.25,0.125,0.125,0.125,0.0625,0.0625
respectively. Compute the Shannon-Fano code for this source; calculate the
entropy, average codeword length and variance of this code.
5 Consider the channel with the input/output transition diagram shown below. 3 1
(Figure: transition diagram from inputs X1, X2, X3 to outputs Y1, Y2, Y3
with branch probabilities 0.8, 0.2, 1, 0.3 and 0.7.)
Find H(X), H(Y), H(X|Y), H(Y|X) and H(X,Y)

13. Supportive Online Certification Courses


1. Analog Communications By Prof. Goutam Das, conducted by IIT Kharagpur – 12
weeks
2. Analog Communications by Prof. Surendra Prasad, conducted by IIT Delhi – 12 weeks.

14. Real Time Applications

S.No Application CO
1 Intelligence uses and secrecy applications 5
Information theoretic concepts apply to cryptography and cryptanalysis.
Turing's information unit was used in the Ultra project, breaking the
German Enigma machine code and hastening the end of World War II in
Europe. Shannon himself defined an important concept now called the
unicity distance.
2 Miscellaneous applications 5
Information theory also has applications in Gambling and information
theory, black holes and bioinformatics.
3 Semiotics 5
Semioticians Doede Nauta and Winfried Nöth both considered Charles
Sanders Peirce as having created a theory of information in his works on
semiotics. Nauta defined semiotic information theory as the study of "the
internal processes of coding, filtering, and information processing."
4 Seismic Exploration 5
One early commercial application of information theory was in the field
of seismic oil exploration. Work in this field made it possible to strip off and
separate the unwanted noise from the desired seismic signal. Information
theory and digital signal processing offer a major improvement of
resolution and image clarity over previous analog methods.
5 Pseudorandom Number Generation 5
Pseudorandom number generators are widely available in computer
language libraries and application programs. They are, almost
universally, unsuited to cryptographic use as they do not evade the
deterministic nature of modern computer equipment and software. A
class of improved random number generators is termed
cryptographically secure pseudorandom number generators, but even
they require random seeds external to the software to work as intended.
These can be obtained via extractors, if done carefully.

15. Contents Beyond the Syllabus


1. Markoff sources
Condition monitoring of critical machine tool components and machining processes is a key factor in increasing the availability of the machine tool and achieving a more robust machining process. Failures in the machining process and machine tool components may also have negative effects on the final produced part. Instabilities in machining processes also shorten the life time of the cutting edges and the machine tool.
2. Maintenance of Machine tools

A machine tool is a valuable piece of equipment for any small or large business.
Cared for properly, your machine can last for years, but if your machine tool is
neglected, it can add up to a number of costly machine tool repairs. If you want
to extend the life of your equipment, maintenance of Machine tools are very
much essential.

16. Prescribed Text Books & Reference Books


Text Book
1. B.P. Lathi, “Modern Digital and Analog Communications Systems”, 3rd Edition,
Oxford Univ. Press, 2006.
2. Simon Haykin, “Communication Systems”, John Wiley & Sons, 3rd Edition, 2010.

References:
1. Simon Haykin, “Communication Systems”, Wiley-India edition, 3rd edition, 2010.
2. Herbert Taub& Donald L Schilling, “Principles of Communication Systems”, Tata
McGraw-Hill, 3rd Edition, 2009.

17. Mini Project Suggestion


1. Analyze any speech signal using open source and find fundamental
parameters
Most of us have heard of Audacity in connection with making ringtones. But it has
many more uses, such as recording live audio from a voice or from musical
instruments, creating audio books, mixing music, and so on.
2. Voice Activated Home Automation
The concept of Home Automation is gaining popularity as it helps in reducing human effort and errors and thus increases efficiency. With the help of a Home Automation system, we can control different appliances like lights, fans, TV, AC etc. Additionally, a home automation system can also provide other features like security, alarms and emergency systems, which can be integrated.

