
Communication Engineering Unit-3: Noise in Communication System

UNIT
Probability & Random Process and
Noise in Communication System

SYLLABUS
Review of probability and random process, Gaussian and white noise characteristics, noise
in amplitude modulation systems, noise in frequency modulation systems, pre-emphasis and
de-emphasis, threshold effect in angle modulation.

PROBABILITY, RANDOM SIGNALS & RANDOM PROCESS

3.1 INTRODUCTION
Deterministic signals:
Signals that can be completely described by fixed mathematical equations are called deterministic signals. The behaviour of such signals, as well as their processing through linear time-invariant (LTI) systems, can be determined with the help of mathematical models.
Random signals:
There is another class of signals whose behaviour cannot be predicted. Such signals are called random signals. An example of a random signal is noise interference in a communication system; for instance, the thermal noise in a receiver is caused by the random motion of electrons.

3.1.1 Basic Definitions Related to Probability

Experiment
An experiment is defined as a process which is conducted to get some result, for example tossing a coin or rolling a die:
S = {H, T}
S = {1, 2, 3, 4, 5, 6}
Hence, each experiment or trial has an outcome, and the possibility of this outcome can be predicted with the help of probability theory.
Equally likely: outcomes with an equal distribution of probability.
Not equally likely: outcomes with an unequal distribution of probability.

3.1.2 Sample Space


A set of all possible outcomes of an experiment or trial is called the sample space of that experiment. It is generally denoted by S, and the total number of outcomes in it is denoted by n(S). As an example, if three coins are tossed simultaneously, each coin has two outcomes, namely H and T. Hence the sample space will be:
S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
so that n(S) = 8.

3.1.3 Event
An expected subset of the sample space, i.e. a happening of interest, is called an event. As an example, consider the experiment of throwing a cubic die. In this case, the sample space S will be
S = {1, 2, 3, 4, 5, 6}
Now, if we are interested in an even-numbered outcome, i.e. {2, 4, 6}, then this subset is called an event:
E = {2, 4, 6}
If E = ∅, it is a null event; if E = S, it is a certain event.
If E1 ∩ E2 = ∅, the events are mutually exclusive.
If the happening of event E1 does not affect the happening of E2, the events are called independent; otherwise they are dependent.

3.1.4 Probability
The chance of occurrence of an event is called its probability. The probability of an event E is
P(E) = (number of favourable outcomes) / (total number of outcomes)
For example, consider the probability of getting an even number when a die is tossed:
S = {1, 2, 3, 4, 5, 6}
E = {2, 4, 6}
P(E) = 3/6 = 1/2 = 0.5 = 50%
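The die example above can be checked with a short Python sketch using exact fractions (the set names `S` and `E` are just illustrative):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}             # sample space of one die roll
E = {x for x in S if x % 2 == 0}   # event: an even number appears

# P(E) = number of favourable outcomes / total number of outcomes
P_E = Fraction(len(E), len(S))
print(P_E)   # 1/2
```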

The properties of probability may be listed as under:

Property 1: 𝑃(𝐸) = 1, if E=S


Property 2: 0 ≤ 𝑃(𝐸) ≤ 1
Property 3: For mutually exclusive events, P(A + B) = P(A) + P(B)
Property 4: P(A̅) = 1 − P(A)
Property 5: For events that are not mutually exclusive, P(A + B) = P(A) + P(B) − P(AB)

3.2 CONDITIONAL PROBABILITY


The concept of conditional probability is used for conditional occurrences of events. The conditional probability of event B, given that event A has already happened, is
P(B/A) = P(AB) / P(A)
where P(AB) is the joint probability of A and B. Similarly,
P(A/B) = P(AB) / P(B)
At this point, it may be noted that the joint probability has a commutative property, i.e. P(AB) = P(BA).

3.2.1 Probability of Statistically Independent Events


If the occurrence of event B does not depend upon the occurrence of event A, then the two events A and B are known as statistically independent events. Starting from
P(B/A) = P(AB) / P(A)
and noting that for independent events P(B/A) = P(B), we get
P(AB) = P(A)P(B)
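Independence can be illustrated exhaustively on the 36 equally likely outcomes of two dice; this is a small sketch with two hypothetical events, A (first die even) and B (sum equals 7):

```python
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))    # 36 equally likely outcomes
A = {w for w in omega if w[0] % 2 == 0}         # event A: first die is even
B = {w for w in omega if sum(w) == 7}           # event B: the sum is 7

def P(event):
    return Fraction(len(event), len(omega))

# Independence: the joint probability factors into the product of marginals
assert P(A & B) == P(A) * P(B)
print(P(A), P(B), P(A & B))   # 1/2 1/6 1/12
```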
3.2.2 Random Variables
A function that assigns to each outcome of the sample space a value from some set of real numbers is called a random variable of the experiment. Random variables are denoted by upper-case letters such as X, Y, etc., and the values taken by them are denoted by lower-case letters with subscripts, such as x1, x2, y1, y2, etc.
Random variables may be classified as:
1. Discrete random variables
2. Continuous random variables

3.2.3. Discrete Random Variables
A discrete random variable may be defined as a random variable which can take on only a finite
number of values in a finite observation interval.
𝑆 = {𝐻𝐻𝐻, 𝐻𝐻𝑇, 𝐻𝑇𝐻, 𝐻𝑇𝑇, 𝑇𝐻𝐻, 𝑇𝐻𝑇, 𝑇𝑇𝐻, 𝑇𝑇𝑇}
𝑋 = {𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 , 𝑥5 , 𝑥6 , 𝑥7 , 𝑥8 }
n(S) = 8
3.2.4. Continuous Random Variables
A random variable that takes on an infinite number of values is called a continuous random
variable. As an example, the noise voltage generated by an electronic amplifier has a continuous
amplitude.

Probability Function or Probability Distribution of a Discrete Random Variable


Let 𝑋 be a discrete random variable and also let 𝑥1 , 𝑥2 , 𝑥3 , … be the values that 𝑋 can take.
Then 𝑃(𝑋 = 𝑥𝑗 ) = 𝑓(𝑥𝑗 ), j = 1, 2, 3, … ….
This 𝑓(𝑥𝑗 ) or simply 𝑓(𝑥) is called the probability function or probability distribution of the
discrete random variable.

3.2.5. Cumulative Distribution Function (CDF)


The cumulative distribution function (CDF) of a random variable X may be defined as the probability that the random variable X takes a value less than or equal to some value x. The CDF provides a probabilistic description of a random variable. According to this definition, the CDF may be written as
CDF: F_X(x) = P(X ≤ x), where x is a dummy variable

3.2.6. Properties of Cumulative Distribution Function (CDF)


The properties of CDF may be listed as under:
Property 1: 0 ≤ 𝐹𝑋 (𝑥) ≤ 1
Property 2: 𝐹𝑋 (−∞) = 0, 𝑖𝑚𝑝𝑜𝑠𝑠𝑖𝑏𝑙𝑒 𝑒𝑣𝑒𝑛𝑡 and 𝐹𝑋 (∞) = 1, certain event
Property 3: 𝐹𝑋 (𝑥1 ) ≤ 𝐹𝑋 (𝑥2 ) if 𝑥1 ≤ 𝑥2
For a discrete random variable, the CDF over the complete range of x may be expressed as
F_X(x) = 0 for −∞ < x < x1
F_X(x) = Σ_{x_j ≤ x} P(X = x_j) for x1 ≤ x ≤ x_n
F_X(x) = 1 for x_n < x < ∞
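The discrete CDF above can be sketched in Python for the three-coin experiment, with X counting the number of heads (the helper name `cdf` is illustrative):

```python
from fractions import Fraction
from itertools import product

# X = number of heads when three fair coins are tossed; n(S) = 8
pmf = {}
for outcome in product("HT", repeat=3):
    x = outcome.count("H")
    pmf[x] = pmf.get(x, Fraction(0)) + Fraction(1, 8)

def cdf(x):
    # F_X(x) = P(X <= x): add up the pmf over all values x_j <= x
    return sum((p for xj, p in pmf.items() if xj <= x), Fraction(0))

print(cdf(-1), cdf(1), cdf(3))   # 0 1/2 1
```

Note how the three printed values illustrate F_X(−∞) = 0, an intermediate value, and F_X(∞) = 1.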

3.2.7. Probability Density Function (PDF)


The derivative of the cumulative distribution function (CDF) with respect to the dummy variable is known as the probability density function (PDF). The PDF is generally denoted by f_X(x). Mathematically,
PDF: f_X(x) = (d/dx) F_X(x), where x is a dummy variable
Properties of Probability Density Function (PDF)
Property 1: f_X(x) ≥ 0 for all values of x
Property 2: The area under the PDF curve is always equal to unity. Mathematically,
∫_{−∞}^{∞} f_X(x) dx = 1
Proof: The expression for the PDF is
f_X(x) = (d/dx) F_X(x)
Integrating both sides over (−∞, ∞), we have
∫_{−∞}^{∞} f_X(x) dx = ∫_{−∞}^{∞} [(d/dx) F_X(x)] dx = [F_X(x)]_{−∞}^{∞} = F_X(∞) − F_X(−∞) = 1 − 0 = 1
Property 3: The cumulative distribution function (CDF) may be obtained by integrating the probability density function (PDF). Mathematically,
F_X(x) = ∫_{−∞}^{x} f_X(u) du
Proof: Integrating the PDF up to x,
∫_{−∞}^{x} f_X(u) du = ∫_{−∞}^{x} [(d/du) F_X(u)] du = [F_X(u)]_{−∞}^{x} = F_X(x) − F_X(−∞) = F_X(x) − 0 = F_X(x)
Hence F_X(x) = ∫_{−∞}^{x} f_X(u) du. Hence proved.
Property 4: P(x1 < X ≤ x2) = ∫_{x1}^{x2} f_X(x) dx
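Properties 2 to 4 can be verified numerically for an assumed example PDF; here an exponential density with rate λ = 2 and a simple trapezoidal integrator (both illustrative choices, not from the notes):

```python
import math

lam = 2.0
f = lambda x: lam * math.exp(-lam * x)   # assumed example PDF: exponential, x >= 0

def integrate(g, a, b, n=200_000):
    # simple trapezoidal rule, a numerical stand-in for the integral
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * h) for i in range(1, n)))

# Property 2: total area under the PDF is one (the tail beyond 20 is negligible)
assert abs(integrate(f, 0, 20) - 1.0) < 1e-5
# Property 3: F_X(x) is the integral of the PDF up to x; closed form 1 - e^(-lam x)
assert abs(integrate(f, 0, 0.7) - (1 - math.exp(-lam * 0.7))) < 1e-5
# Property 4: P(x1 < X <= x2) equals the integral of the PDF from x1 to x2
assert abs(integrate(f, 0.2, 1.0) - (math.exp(-lam * 0.2) - math.exp(-lam * 1.0))) < 1e-5
print("PDF properties verified numerically")
```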

3.2.8. Joint Probability Function


The joint distribution function, or joint CDF, F_XY(x, y) of two random variables X and Y is the probability that the random variable X is less than or equal to a specified value x and that the random variable Y is less than or equal to a specified value y. The joint cumulative distribution function may be defined as:
F_XY(x, y) = P(X ≤ x, Y ≤ y)
Properties of Joint Cumulative Distribution Function
Property 1: 𝐹𝑋𝑌 (𝑥, 𝑦) ≥ 0
Property 2: Monotone non-decreasing function of both 𝑥 and 𝑦.
Property 3: Always continuous everywhere in the xy plane.

3.2.9. The Joint Probability Density Function


The joint probability density function, or simply joint PDF, is the PDF of two or more random variables. The joint PDF of any two random variables X and Y may be defined as the partial derivative of the joint cumulative distribution function F_XY(x, y) with respect to the dummy variables x and y. Mathematically,
f_XY(x, y) = ∂²F_XY(x, y) / (∂x ∂y)

3.2.10. Properties of Joint PDF


Property 1: f_XY(x, y) ≥ 0
Property 2: The total volume under the surface of the joint PDF is equal to unity. Mathematically,
∬ f_XY(x, y) dx dy = 1
Because f_XY(x, y) is a two-dimensional function, it represents a surface over the xy plane; hence its integration is called the volume under the surface.
Property 3: The joint PDF is continuous everywhere because the joint CDF is continuous.
P(x1 < X ≤ x2, y1 < Y ≤ y2) = ∫_{y1}^{y2} ∫_{x1}^{x2} f_XY(x, y) dx dy
In the above expression, the double integral represents the volume between the xy plane and the surface f_XY(x, y).
Now, if the two random variables X and Y are statistically independent, their joint PDF becomes the product of two separate PDFs. Mathematically,
f_XY(x, y) = f_X(x) f_Y(y)
Putting this value of f_XY(x, y) in the previous equation, we have
P(x1 < X ≤ x2, y1 < Y ≤ y2) = ∫_{y1}^{y2} ∫_{x1}^{x2} f_X(x) f_Y(y) dx dy
The above equation provides the relationship between the probability and the PDFs of statistically independent random variables X and Y.

3.2.11. Marginal Densities


When we are dealing with two random variables, the individual probability densities f_X(x) and f_Y(y) can be obtained from the joint PDF. These individual densities are called marginal PDFs:
f_X(x) = ∫_{−∞}^{∞} f_XY(x, y) dy
f_Y(y) = ∫_{−∞}^{∞} f_XY(x, y) dx

3.2.12. Conditional Probability Density Function


If the random variables X and Y are not independent, their dependence is expressed by the conditional PDF. The conditional PDF of Y, given that X = x, is
f_Y(y/X = x) = f_XY(x, y) / f_X(x)
Similarly, the conditional PDF of X, given that Y = y, is
f_X(x/Y = y) = f_XY(x, y) / f_Y(y)
where f_X(x) and f_Y(y) are the marginal densities of the random variables X and Y.

3.2.13. Properties of Conditional PDF


Property 1: f_X(x/y) ≥ 0 and f_Y(y/x) ≥ 0
Property 2: ∫_{−∞}^{∞} f_X(x/y) dx = 1 and ∫_{−∞}^{∞} f_Y(y/x) dy = 1
Property 3: For statistically independent random variables, f_Y(y/x) = f_Y(y) and f_X(x/y) = f_X(x)
Statistical Averages of Random Variables
Mean or Average:
The mean of a random variable X is the sum of its values weighted by their probabilities. The mean value is denoted by m_x and is also known as the expected value of the random variable X:
m_x = E[X] = (arithmetic sum of all values of X) / (total number of values of X)
where E[ ] represents the expectation operator.

3.2.14. Mean Value of Discrete Random Variable


Let the discrete random variable X take the values
X = {x1, x2, x3, …, x_n}
with probabilities
P = {P(x1), P(x2), P(x3), …, P(x_n)}
Then the mean or average value m_x is expressed as
m_x = x1 P(x1) + x2 P(x2) + x3 P(x3) + ⋯ + x_n P(x_n)
Hence the mean value of a discrete random variable is
m_x = E(X) = X̄ = Σ_{i=1}^{n} x_i P(x_i)
Here X̄ is also a notation for the mean value.

3.2.15. Mean Values of Continuous Random Variables
If the random variable X becomes continuous, the sample points x1, x2, x3, …, x_n become so close to each other that (x2 − x1) ≅ 0. For a continuous random variable X, the mean or average value is expressed as
m_x = ∫_{−∞}^{∞} x f_X(x) dx
where f_X(x) is the probability density function (PDF).
Also, if a function g(x) transforms X into another random variable, the mean value of g(X) may be expressed as
m = g(X)̄ = E[g(X)] = ∫_{−∞}^{∞} g(x) f_X(x) dx

3.3. Moments and Variance


The nth moment of any random variable X may be defined as the mean value of X^n, i.e.
X^n̄ = E(X^n) = ∫_{−∞}^{∞} x^n f_X(x) dx

First moment of random variable X (mean value)
Putting n = 1,
X̄ = E(X) = ∫_{−∞}^{∞} x f_X(x) dx = m_x

Second moment of random variable X (mean square value)
Putting n = 2,
X²̄ = E(X²) = ∫_{−∞}^{∞} x² f_X(x) dx
where X²̄ is known as the mean square value of the random variable X.

3.3.1. Central moments


The central moments are the moments of the difference between the random variable X and its mean m_x.
nth central moment
The nth central moment may be given as
E[(X − m_x)^n] = ∫_{−∞}^{∞} (x − m_x)^n f_X(x) dx
The second central moment (Variance)
The second central moment, for n = 2, is known as the variance of the random variable X, i.e.
Var[X] = E[(X − m_x)²] = ∫_{−∞}^{∞} (x − m_x)² f_X(x) dx
Hence, the variance provides an indication of the randomness of the random variable. Variance is generally represented by σ_x², i.e.
σ_x² = Var(X) = E[(X − m_x)²] = ∫_{−∞}^{∞} (x − m_x)² f_X(x) dx
Expanding the square,
σ_x² = E[X² − 2 m_x X + m_x²]
The E[ ] operator represents the mean or average value and is therefore linear. Thus,
σ_x² = E(X²) − 2 m_x E(X) + m_x² = E(X²) − 2 m_x m_x + m_x² = E(X²) − m_x²
i.e. σ_x² = mean square value − square of mean value = X²̄ − m_x²
Therefore,
standard deviation σ_x = √variance = √(E(X²) − m_x²)
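The identity σ_x² = E[X²] − m_x² can be checked for a fair die with exact fractions:

```python
from fractions import Fraction

X = [1, 2, 3, 4, 5, 6]
P = [Fraction(1, 6)] * 6                         # fair die: equally likely values

mean    = sum(x * p for x, p in zip(X, P))       # m_x = E[X]
mean_sq = sum(x * x * p for x, p in zip(X, P))   # E[X^2], second moment
var     = mean_sq - mean ** 2                    # sigma_x^2 = E[X^2] - m_x^2
print(mean, mean_sq, var)   # 7/2 91/6 35/12
```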

3.3.2. Random Process


When a random experiment E is given a time dimension, each outcome appears at some certain time and the random experiment becomes a random process. A random process is a function of two variables, λ and t; the function X(λ, t) is known as a random process. As an example, consider the set of voltage waveforms generated by the thermal motion of electrons in a large number of identical resistors.

3.3.3. Ensemble and Time Averages


Random processes cannot be completely specified with the help of PDFs. Generally, ensemble and time statistics are used to specify random processes. Most commonly, statistical averages such as the mean m_x(t) and the autocorrelation function R_X(t1, t2) are used to describe a random process.

Ensemble Averages
In the case of ensemble averages, the average is taken over the ensemble of waveforms, keeping the time fixed. Hence the ensemble mean is
m_x(t) = E[X(v, t)] = ∫_{−∞}^{∞} x f_X(x, t) dx
i.e. at a fixed instant t1 we take the mean of v1(t1), v2(t1), v3(t1), …, v_i(t1).
The ensemble autocorrelation is
R_X(t1, t2) = E[X(v, t1) ⋅ X(v, t2)] = E[X(t1) ⋅ X(t2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 f_{X1 X2}(x1, x2) dx1 dx2
The value of R_X(t1, t2) represents the autocorrelation; the autocorrelation function gives information about the frequency content of the process.

Time Averages
When the statistical averages are taken along the time axis, they are known as time averages. As an example, the time mean value of a sample function x(t) is
⟨m_x⟩ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) dt
The autocorrelation function may likewise be expressed as a time average:
⟨R_X(τ)⟩ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) ⋅ x(t + τ) dt
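The limits above can be approximated by a long but finite T. This sketch uses a random-phase sinusoid as the sample function (an illustrative choice), for which the time-averaged autocorrelation should approach 0.5·cos(ωτ) regardless of the phase:

```python
import math, random

# Sample function: a sinusoid with a random phase, x(t) = cos(w t + theta)
theta = random.uniform(0, 2 * math.pi)
w = 2 * math.pi                      # one cycle per unit time
x = lambda t: math.cos(w * t + theta)

T, N = 1000.0, 100_000               # finite stand-ins for the T -> infinity limit
dt = T / N
ts = [i * dt for i in range(N)]

time_mean = sum(x(t) for t in ts) * dt / T
time_acf = sum(x(t) * x(t + 0.1) for t in ts) * dt / T   # time lag tau = 0.1

print(abs(time_mean) < 1e-6)                             # True: ~0 for any phase
print(abs(time_acf - 0.5 * math.cos(w * 0.1)) < 1e-6)    # True: ~0.5 cos(w tau)
```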

3.3.4. Stationary and Non-Stationary Random Process


A random process X(t) is called stationary if its statistics are not affected by any shift in the time origin. We may define a stationary process in terms of ensemble averages as follows:
(i) The ensemble mean is independent of time. Mathematically,
m_x(t) = m_x(t1) = m_x(t2) = m_x(t3) = ⋯ = constant at all time instants
(ii) The autocorrelation function R_X(t1, t2) depends only upon the time difference t2 − t1:
R_X(t1, t2) = R_X(t1 + t, t2 + t)
Therefore, the autocorrelation function of any stationary process is a function of the time difference only:
R_X(t1, t2) = R_X(t2 − t1) = R_X(τ), where τ = t2 − t1

3.3.5. Wide Sense Stationary Process (Weakly Stationary Process)


A process may not be stationary in the strict sense, yet its mean and autocorrelation function may still be independent of a shift of the time origin. Such a process is known as a wide-sense stationary (WSS) process.
All processes in practice are non-stationary, since every process has some start and end, which makes its statistics dependent upon time; a strictly stationary process would have to start at t = −∞ and continue until t = ∞, which is not possible practically. A process may, however, appear stationary over a certain period of time, and over that period it is a wide-sense stationary process.

3.3.6. Ergodic Process


A random process is known as an ergodic process if its time averages are equal to its ensemble averages. Hence, for an ergodic process we have
m_x = ⟨m_x⟩
R_X(t1, t2) = ⟨R_X(τ)⟩
Ergodicity may be defined in terms of particular statistical averages such as the mean and the autocorrelation.
The random process is ergodic in the mean if
m_x = ⟨m_x⟩ and the variance of ⟨m_x⟩ → 0 as T → ∞
Similarly, the random process is ergodic in the autocorrelation if
R_X(t1, t2) = ⟨R_X(τ)⟩ and the variance of ⟨R_X(τ)⟩ → 0 as T → ∞

3.3.7. Gaussian Process


Let us consider a random process denoted by X(t) over an interval which starts at time t = 0 and lasts until t = T. Now let us weight the random process X(t) by a function g(t) and integrate the product X(t)g(t) over this observation interval (0 ≤ t ≤ T). We thus obtain a random variable Y defined as
Y = ∫_0^T X(t) g(t) dt
where Y is a linear functional of X(t), and it is finite.
Now, if the random variable Y is a Gaussian-distributed random variable for every function g(t), then the process X(t) is known as a Gaussian process. A random variable Y has a Gaussian distribution if its probability density function has the form
f_Y(y) = (1 / (σ_Y √(2π))) e^{−(y − m_Y)² / 2σ_Y²}

Figure 1 exhibits a plot of the Gaussian probability density function (pdf) for m_Y = 0 and σ_Y² = 1.
It may be noted that a Gaussian process has two main advantages:
(i) Firstly, the Gaussian process has many properties which make analytic results possible.
(ii) Secondly, the random processes produced by physical phenomena are often such that a Gaussian model is appropriate.

Figure 1: Normalised Gaussian Process

A Gaussian process has the following important properties:

(i) If a Gaussian process X(t) is applied to a stable linear filter, then the random process Y(t) produced at the output of the filter is also Gaussian.
(ii) Consider the samples X(t1), X(t2), …, X(t_n), obtained by observing the random process X(t) at times t = t1, t = t2, …, t = t_n. If the random process X(t) is Gaussian, then this set of random variables is jointly Gaussian for any value of n, with their n-fold joint pdf completely determined by specifying the set of means
m_X(t_i) = E[X(t_i)], i = 1, 2, …, n
and the set of autocovariance functions
C_X(t_j, t_i) = E[{X(t_j) − m_X(t_j)}{X(t_i) − m_X(t_i)}], j = 1, 2, …, n
(iii) If the set of random variables X(t1), X(t2), …, X(t_n), obtained by sampling a Gaussian process X(t) at times t = t1, t = t2, …, is uncorrelated, then this set of random variables is statistically independent.

3.3.8. Sum of Random Processes


Now we shall discuss the sum of two random processes. Let us consider two WSS processes X(t) and Y(t) with zero means, and denote their sum by the random process Z(t):
Z(t) = X(t) + Y(t)
The autocorrelation function of Z(t) is found as
R_Z(τ) = E[Z(t)Z(t − τ)] = E[{X(t) + Y(t)}{X(t − τ) + Y(t − τ)}]
= E[X(t)X(t − τ)] + E[X(t)Y(t − τ)] + E[Y(t)X(t − τ)] + E[Y(t)Y(t − τ)]
or R_Z(τ) = R_X(τ) + R_XY(τ) + R_YX(τ) + R_Y(τ)
The power spectral density (psd) of Z(t) is therefore
S_Z(ω) = S_X(ω) + S_XY(ω) + S_YX(ω) + S_Y(ω)
If the two processes are non-correlated (with zero means), the cross terms vanish and
S_Z(ω) = S_X(ω) + S_Y(ω)

3.3.9. Correlation Function


The correlation function provides a measure of similarity or coherence between a given signal
(process) and a replica of the same signal or other signal (process) by a variable amount.

Autocorrelation Function
Autocorrelation function may be defined as a measure of similarity between a signal or process and a replica of itself delayed by a variable amount. The autocorrelation function of a stationary process X(t) may be defined as
R_X(t_j − t_i) = E[X(t_j)X(t_i)] for any t_j and t_i
where X(t_j) and X(t_i) are the random variables obtained by observing the process X(t) at times t_j and t_i, respectively. Equivalently,
R_X(τ) = E[X(t)X(t − τ)]
where X(t) and X(t − τ) are viewed as random variables. The variable τ is known as the time-lag or time-delay parameter. The autocorrelation function of a stationary random process is independent of a shift of the time origin.

Properties of Autocorrelation Function of Random Process


Property 1: The autocorrelation function of a WSS process X(t) is always an even function of the time lag, i.e.
R_X(τ) = R_X(−τ)
Property 2: R_X(0) gives the mean square value of a WSS process:
R_X(0) = R_X(τ)|_{τ=0} = E[X(t)X(t − τ)]|_{τ=0} = E[X²(t)]
Property 3: |R_X(τ)| ≤ R_X(0), i.e. the magnitude of the autocorrelation is maximum at the origin.
Proof: Since the expectation of a non-negative quantity is non-negative,
E[{X(t) ∓ X(t − τ)}²] ≥ 0
or E[X²(t)] ∓ 2E[X(t)X(t − τ)] + E[X²(t − τ)] ≥ 0
since the expectation operation is linear. Also, for a WSS random process, we have
E[X²(t)] = E[X²(t − τ)] = R_X(0) and E[X(t)X(t − τ)] = R_X(τ)
so that 2R_X(0) ∓ 2R_X(τ) ≥ 0, i.e. |R_X(τ)| ≤ R_X(0).

Fig.1. Illustration of autocorrelation functions of slowly and rapidly varying random processes.
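Properties 2 and 3 can be illustrated by estimating the autocorrelation of one realization of white Gaussian samples via a time average (the estimator below is a standard sketch with an arbitrary seed, not from the notes):

```python
import random

random.seed(1)
N = 50_000
x = [random.gauss(0, 1) for _ in range(N)]   # one realization, unit variance

def acf(lag):
    # Time-average estimate of R_X(lag) = E[X(t) X(t - lag)]
    return sum(x[i] * x[i - lag] for i in range(lag, N)) / (N - lag)

print(abs(acf(0) - 1.0) < 0.05)   # True: R_X(0) = E[X^2], the process variance
print(abs(acf(5)) < 0.05)         # True: distinct white samples are uncorrelated
assert abs(acf(5)) <= acf(0)      # |R_X(tau)| <= R_X(0)
```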

Cross-Correlation Functions
An autocorrelation function is determined for a single random process, but a cross-correlation function is determined for two random processes X(t) and Y(t). There are two cross-correlation functions:
R_XY(t, u) = E[X(t)Y(u)]
R_YX(t, u) = E[Y(t)X(u)]
These may be collected in the correlation matrix of X(t) and Y(t):
R(t, u) = [ R_X(t, u)    R_XY(t, u)
            R_YX(t, u)   R_Y(t, u) ]
If both processes are jointly WSS, the matrix depends only on the time difference t − u:
R(t, u) = [ R_X(t − u)    R_XY(t − u)
            R_YX(t − u)   R_Y(t − u) ]
It may be noted that, in general, the cross-correlation function is not an even function of τ like the autocorrelation function. Instead, it has the symmetry relation
R_XY(τ) = R_YX(−τ)
Moreover, the cross-correlation does not necessarily have a maximum at the origin like the autocorrelation function. The two random processes X(t) and Y(t) are said to be incoherent or orthogonal if the cross-correlation function of X(t) and Y(t) is zero:
𝑅𝑋𝑌 (𝜏) = 0
The two random processes are said to be non-correlated if their cross-correlation function R_XY(τ) equals the product of their mean values.

3.4. SPECTRAL DENSITIES


Spectral densities are used to represent a random process in the frequency domain. Frequency-domain methods are very powerful tools for the analysis of various types of signals.
Power Spectral Density (psd)
The power spectral density S_X(ω) of a wide-sense stationary (WSS) random process X(t) may be defined as
S_X(ω) = Fourier transform{R_X(τ)} = ∫_{−∞}^{∞} R_X(τ) e^{−jωτ} dτ
where R_X(τ) is the autocorrelation function of the random process X(t).
Cross Power Spectral Density (CPSD)
The cross power spectral density of two jointly WSS random processes X(t) and Y(t) may be defined as
S_XY(ω) = Fourier transform{R_XY(τ)} = ∫_{−∞}^{∞} R_XY(τ) e^{−jωτ} dτ
where R_XY(τ) is the cross-correlation function of the random processes X(t) and Y(t).
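The Fourier-transform relationship can be checked numerically for an assumed autocorrelation R_X(τ) = e^{−a|τ|}, whose psd is known in closed form to be 2a/(a² + ω²); both the example and the rectangle-rule integrator are illustrative choices:

```python
import math

a = 1.0
R = lambda tau: math.exp(-a * abs(tau))   # assumed autocorrelation function

def psd(w, T=40.0, n=100_000):
    # S_X(w) = integral of R(tau) e^{-j w tau} d tau over (-T, T);
    # since R is even, only the cosine (real) part survives
    h = 2 * T / n
    return h * sum(R(-T + i * h) * math.cos(w * (-T + i * h)) for i in range(n))

for w in (0.0, 1.0, 2.0):
    exact = 2 * a / (a ** 2 + w ** 2)
    assert abs(psd(w) - exact) < 1e-2
print("numerical psd matches 2a/(a^2 + w^2)")
```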

Properties of Power Spectral Density


The power spectral density (psd) of a random process has the following main properties.
Property 1: The power spectral density of a random process is a real function of the frequency ω.
Proof: We know that the autocorrelation function of a random process is an even function, i.e.
R_X(τ) = R_X(−τ)
Expanding the definition of the psd,
S_X(ω) = ∫_{−∞}^{∞} R_X(τ) e^{−jωτ} dτ = ∫_{−∞}^{∞} R_X(τ)[cos ωτ − j sin ωτ] dτ
= ∫_{−∞}^{∞} R_X(τ) cos ωτ dτ − j ∫_{−∞}^{∞} R_X(τ) sin ωτ dτ
Now let I = ∫_{−∞}^{∞} R_X(τ) sin ωτ dτ. Substituting τ = −t, we get
I = ∫_{−∞}^{∞} R_X(−t) sin(−ωt) dt = −∫_{−∞}^{∞} R_X(−t) sin ωt dt
Using the even property of the autocorrelation function, this becomes
I = −∫_{−∞}^{∞} R_X(t) sin ωt dt = −I
so that I = 0. Substituting I = 0 above, we get
S_X(ω) = ∫_{−∞}^{∞} R_X(τ) cos ωτ dτ
which is real.
Property 2: S_X(ω) = S_X(−ω)
Property 3: S_X(ω) ≥ 0 for all ω

3.4.1. Energy Spectral Density (ESD)


The energy spectral density (ESD) ψ_X(ω) may be defined as a measure of the density of the energy contained in the random process X(t), in joules per hertz. It may be noted that, since the amplitude spectrum of a real-valued random process X(t) is an even function of ω, the energy spectral density is symmetrical about the vertical axis through the origin.
Thus, the total energy of the random process X(t) is defined as
E = (1/2π) ∫_{−∞}^{∞} ψ_X(ω) dω

3.4.2. Uniform Distribution


Uniform PDF:
f_X(x) = 1/A for (m − A/2) ≤ x ≤ (m + A/2)
f_X(x) = 0 otherwise
The figure shows the PDF of a uniformly distributed random variable.

Fig. PDF of a uniformly distributed random variable

3.4.3. Gaussian or Normal Distribution


Gaussian distribution is also known as normal distribution. It is defined for a continuous random variable. The PDF of a Gaussian random variable is expressed as
f_X(x) = (1 / (σ√(2π))) e^{−(x − m)² / 2σ²}
where m is the mean value of the random variable and σ² is its variance.

3.4.4 Properties of Gaussian PDF


Property 1: f_X(x) = 1/(σ√(2π)) at x = m (the mean value), which is the peak of the curve.
Property 2: The plot of the Gaussian PDF exhibits even symmetry around the mean value, e.g.
f_X(m − σ) = f_X(m + σ)
Property 3: The area under the PDF curve is 1/2 for all values of x below the mean value and 1/2 for all values of x above the mean value. Mathematically,
P(X ≤ m) = P(X > m) = 1/2
Property 4: As σ → 0, the Gaussian function approaches a δ (impulse) function located at x = m. This is because the area under the PDF curve is always one (unity), as is the area of an impulse function.

3.4.5. Rayleigh Distribution


Rayleigh distribution is used for continuous random variables. It is produced from two Gaussian random variables. Let X and Y be independent Gaussian random variables having zero mean and variance σ². Mathematically,
m_x = m_y = 0 and σ_x² = σ_y² = σ²
The independent Gaussian random variables X and Y are related to the Rayleigh-distributed random variables R and φ by the rectangular-to-polar transformation
R = √(X² + Y²)
φ = tan⁻¹(Y/X)

Fig. 1. Rectangular to polar conversion

After this transformation of the Gaussian random variables X and Y into R, it is found that R has the Rayleigh probability density function (PDF)
f_R(r) = 0 for r < 0
f_R(r) = (r/σ²) e^{−r²/2σ²} for r ≥ 0
This means that R always takes a positive value and never a negative one.

Rayleigh distribution is widely used for modelling the statistics of signals transmitted through radio channels, such as cellular radio.
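The construction R = √(X² + Y²) can be sketched by simulation, comparing the sample mean with the known Rayleigh mean σ√(π/2) (the seed, σ, and sample size are arbitrary choices):

```python
import math, random

random.seed(7)
sigma = 2.0
N = 100_000

# R = sqrt(X^2 + Y^2), with X, Y independent zero-mean Gaussians of variance sigma^2
r = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma)) for _ in range(N)]

assert min(r) >= 0.0                             # R never takes a negative value
sample_mean = sum(r) / N
rayleigh_mean = sigma * math.sqrt(math.pi / 2)   # known closed-form Rayleigh mean
print(abs(sample_mean - rayleigh_mean) < 0.05)   # True: close agreement
```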

3.5. NOISE
Noise is an unwanted signal. It is random in nature and interferes with the desired signals, disturbing the proper reception and reproduction of transmitted signals.

Classification of noise:

Fig. 1. Classification of noise

3.5.1. Noise figure
When the noise factor is expressed in decibels, it is called the noise figure:
Noise figure (F)_dB = 10 log10(F) = 10 log10[(S/N)_i / (S/N)_0]
where (S/N)_i is the signal-to-noise ratio at the input and (S/N)_0 that at the output, so that
(F)_dB = 10 log10(S/N)_i − 10 log10(S/N)_0
The ideal value of the noise figure is 0 dB.

Problem 1.
The signal power and noise power measured at the input of an amplifier are 150 μW and 1.5 μW respectively. If the signal power at the output is 1.5 W and the noise power is 40 mW, calculate the amplifier noise factor and noise figure.
Solution:
Given: P_Si = 150 μW, P_ni = 1.5 μW, P_S0 = 1.5 W, P_n0 = 40 mW.
• Noise factor F:
F = (P_Si / P_ni) × (P_n0 / P_S0) = (150 × 10⁻⁶ / 1.5 × 10⁻⁶) × (40 × 10⁻³ / 1.5)
F = 2.666
• Noise figure (F)_dB:
(F)_dB = 10 log10(F) = 10 log10(2.666) = 4.26 dB
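The computation in Problem 1 can be packaged as a small helper (the function name and argument order are just for illustration):

```python
import math

def noise_factor(p_si, p_ni, p_so, p_no):
    # F = (S/N at the input) / (S/N at the output)
    return (p_si / p_ni) / (p_so / p_no)

F = noise_factor(150e-6, 1.5e-6, 1.5, 40e-3)   # values from Problem 1
print(round(F, 3))                             # 2.667
print(round(10 * math.log10(F), 2))            # 4.26
```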

Problem 2.
The signal-to-noise ratio at the input of an amplifier is 40 dB. If the noise figure of the amplifier is 20 dB, calculate the signal-to-noise ratio in dB at the amplifier output.
Solution:
Given: (S/N)_i = 40 dB, (F)_dB = 20 dB, (S/N)_0 = ?
We know that
(F)_dB = (S/N)_i dB − (S/N)_0 dB
(S/N)_0 dB = (S/N)_i dB − (F)_dB = 40 dB − 20 dB = 20 dB

3.5.2. Equivalent noise temperature at amplifier input


We know that the noise power due to an amplifier having noise factor F is
P_na = (F − 1)KTB_N
If T_e represents the equivalent noise temperature corresponding to this noise power, then
P_na = KT_eB_N
Equating the two expressions, we get
KT_eB_N = (F − 1)KTB_N
T_e = (F − 1)T, i.e. F = 1 + T_e/T

3.5.3. Noise temperature of cascaded Network
Derive an expression for the overall equivalent noise temperature of a cascade connection of any number of noisy two-port networks.
• It is possible to develop an expression for the overall noise temperature using the Friis formula, i.e.
F = F1 + (F2 − 1)/G1 + (F3 − 1)/(G1 G2) + ⋯
Subtracting 1 from both sides, we get
F − 1 = (F1 − 1) + (F2 − 1)/G1 + (F3 − 1)/(G1 G2) + ⋯
• If T_e is the overall equivalent noise temperature of the cascade, while T_e1, T_e2, … are the corresponding values for each amplifier in the cascade, then dividing through by T and using T_e = (F − 1)T for each stage, we have
T_e/T = T_e1/T + (T_e2/T)/G1 + (T_e3/T)/(G1 G2) + ⋯
T_e = T_e1 + T_e2/G1 + T_e3/(G1 G2) + ⋯

3.5.4. Noise factor of amplifier in cascade

Fig. 1. Noise factor of two amplifiers in cascade

• Consider two amplifiers connected in cascade of shown above. The available noise
power at the output of the first amplifier is,
𝑷𝒏𝟎𝟏 = 𝑭𝟏 𝑮𝟏 𝑲𝑻𝟎 𝑩𝑵
• This is available to the second amplifier and second amplifier has noise (𝐹2 − 1)𝐾𝑇𝐵𝑁 of
its own at its input of the second amplifier is,
𝑃𝑛𝑖2 = 𝐹1 𝐺1 𝐾𝑇𝐵𝑁 + (𝐹2 − 1)𝐾𝑇𝐵𝑁
• Considering the second amplifier as a noiseless amplifier with gain 𝐺₂, we have,
𝑃𝑛𝑜2 = 𝐺₂𝑃𝑛𝑖2 → (4.7)
Substituting Eq. (4.6) in Eq. (4.7), we get,
𝑃𝑛𝑜2 = 𝐺₂[𝐹₁𝐺₁𝐾𝑇𝐵𝑁 + (𝐹₂ − 1)𝐾𝑇𝐵𝑁] → (4.8)
• We know that, the overall power gain of the two amplifiers in cascade is,
𝐺 = 𝐺₁𝐺₂
• From figure, the overall noise power is,
𝑃𝑛0 = 𝐹𝐺₁𝐺₂𝐾𝑇𝐵𝑁 → (4.9)
• Equating Eq. (4.8) and (4.9), we get
𝑃𝑛0 = 𝑃𝑛𝑜2
𝐹𝐺₁𝐺₂𝐾𝑇𝐵𝑁 = 𝐺₂[𝐹₁𝐺₁𝐾𝑇𝐵𝑁 + (𝐹₂ − 1)𝐾𝑇𝐵𝑁]
𝐹 = [𝐹₁𝐺₁𝐺₂𝐾𝑇𝐵𝑁 + (𝐹₂ − 1)𝐺₂𝐾𝑇𝐵𝑁] / (𝐺₁𝐺₂𝐾𝑇𝐵𝑁)
𝐹 = 𝐹₁𝐺₁𝐺₂𝐾𝑇𝐵𝑁/(𝐺₁𝐺₂𝐾𝑇𝐵𝑁) + (𝐹₂ − 1)𝐺₂𝐾𝑇𝐵𝑁/(𝐺₁𝐺₂𝐾𝑇𝐵𝑁)
𝑭 = 𝑭₁ + (𝑭₂ − 𝟏)/𝑮₁
By having 𝐺1 large, the noise contribution of the second stage can be made negligible.
• For multistage amplifier

𝐹 = 𝐹₁ + (𝐹₂ − 1)/𝐺₁ + (𝐹₃ − 1)/(𝐺₁𝐺₂) + ⋯ → (4.10)
Equation 4.10 is known as 'Friis' Formula.
Note:
For 4 - stage amplifier,
𝑭 = 𝑭₁ + (𝑭₂ − 𝟏)/𝑮₁ + (𝑭₃ − 𝟏)/(𝑮₁𝑮₂) + (𝑭₄ − 𝟏)/(𝑮₁𝑮₂𝑮₃)
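The Friis formula generalizes to any number of stages; a minimal sketch (the function name and list-based interface are my own):

```python
def friis_noise_factor(factors, gains):
    """Overall noise factor of cascaded stages via the Friis formula.

    factors: [F1, F2, ...] dimensionless noise factors
    gains:   [G1, G2, ...] available power gains (the last gain is unused)
    """
    total = factors[0]
    g = 1.0
    for gain, F in zip(gains, factors[1:]):
        g *= gain            # G1, then G1*G2, then G1*G2*G3, ...
        total += (F - 1.0) / g
    return total

# Two identical stages, F = 2, G = 10: F_total = 2 + (2 - 1)/10 = 2.1
print(friis_noise_factor([2.0, 2.0], [10.0, 10.0]))
```

With a large first-stage gain G₁ the later terms shrink rapidly, which is the point made above about the second stage's noise contribution becoming negligible.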

Note:
Representation of in- phase and quadrature Components:
We may represent 𝑛(𝑡) in canonical form:
𝒏(𝒕) = 𝒏𝑰 (𝒕)𝐜𝐨𝐬(𝟐𝝅𝒇𝒄 𝒕) − 𝒏𝑸 (𝒕)𝐬𝐢𝐧(𝟐𝝅𝒇𝒄 𝒕)
Where,
𝑛𝐼 (𝑡) is in-phase component of 𝑛(𝑡) and 𝑛Q (𝑡) is called the quadrature component of 𝑛(𝑡).
Whenever a modulated signal is transmitted through a channel, it is always corrupted by random
noise either in the channel or in the receiver circuit. The noise is modeled as an additive white
Gaussian noise for studying its effect on the demodulated signal.

3.6. RECEIVER MODEL


A receiver can be modeled as a band pass filter [BPF] followed by a demodulator as shown in the
Fig (1).

Fig. 1: Receiver model

𝑠(𝑡) denotes the incoming modulated signal and 𝑤(𝑡) denotes front end receiver noise.

BPF represents the tuned amplifier used in the receiver. The bandwidth of the BPF is equal to the
transmission bandwidth of the modulated signal 𝑠(𝑡).
Let the PSD [power spectral density] of the noise 𝑤(𝑡) be denoted by 𝑁0/2, defined for both positive
and negative frequencies; that is, 𝑁0 is the average noise power per unit bandwidth measured at the front
end of the receiver.

The BPF in the receiver model of Fig. (1) is ideal, having a bandwidth equal to the transmission
bandwidth 𝐵𝑇 of the modulated signal 𝑠(𝑡) and a mid-band frequency equal to the carrier
frequency 𝑓𝑐.

The filtered noise 𝑛(𝑡) is a narrow-band noise, represented in the canonical form:
𝒏(𝒕) = 𝒏𝑰 (𝒕)𝐜𝐨𝐬(𝟐𝝅𝒇𝒄 𝒕) − 𝒏𝑸 (𝒕)𝐬𝐢𝐧(𝟐𝝅𝒇𝒄 𝒕)
Where,
𝑛𝐼 (𝑡) is the in-phase noise component and 𝑛𝑄 (𝑡) is the quadrature noise component, both
measured with respect to carrier wave 𝐴𝑐 cos(2𝜋𝑓𝑐 𝑡).

The filtered signal 𝑥(𝑡) available for demodulation is defined by,


𝒙(𝒕) = 𝒔(𝒕) + 𝒏(𝒕)
The details of 𝑠(𝑡) depend on the type of modulation used.
The average noise power is equal to 𝑁0 𝐵𝑇

Input signal to noise ratio (𝑺𝑵𝑹)𝑰 :
(𝑆𝑁𝑅)𝐼 is defined as the ratio of the average power of the modulated signal 𝑆(𝑡) to the average
power of the filtered noise 𝑛(𝑡).
Output signal to noise ratio (𝑺𝑵𝑹)𝟎 :
(𝑆𝑁𝑅)0 is defined as the ratio of the average power of the demodulated message signal to the
average power of the noise both measured at the receiver output.
Channel signal to Noise ratio (𝑺𝑵𝑹)𝑪 :
(𝑆𝑁𝑅)𝑐 is defined as the ratio of the average power of the modulated signal to the average power
of noise in the message band- width, both measured at the receiver input.
Figure of merit (FOM):
It is defined as the ratio of output signal to noise ratio to channel signal to noise ratio (SNR)C .
𝑭𝑶𝑴 = (𝑺𝑵𝑹)₀/(𝑺𝑵𝑹)𝑪

FORMULAE
1. (𝑆𝑁𝑅)𝐶 = Power in 𝑆(𝑡) / Noise power within the bandwidth of 𝑚(𝑡) = Power in 𝑆(𝑡) / (𝑁0𝑊)

2. (𝑆𝑁𝑅)𝐼 = Power in 𝑆(𝑡) / Power in 𝑛(𝑡) = Power in 𝑆(𝑡) / (𝑁0𝐵)

3. (𝑆𝑁𝑅)₀ = Information power at the output of receiver / Noise power at the output of receiver
   = Power in 𝑚𝑑(𝑡) / Power in 𝑛𝑑(𝑡)
(𝑆𝑁𝑅)₀ depends upon the type of demodulation process used at the receiver.

4. 𝐹𝑂𝑀 = (𝑆𝑁𝑅)₀/(𝑆𝑁𝑅)𝐶

Average power across a 1 Ω load resistor

5. Avg power of message signal 𝑚(𝑡): 𝑃 = 𝑚²(𝑡) (mean-square value)
6. Avg power of 𝑘𝑚(𝑡): [𝑘𝑚(𝑡)]² = 𝑘²𝑚²(𝑡) = 𝑘²𝑃
7. Avg power of 𝐴𝑚cos(2𝜋𝑓𝑚𝑡): (𝐴𝑚/√2)² = 𝐴𝑚²/2
8. Avg power of 𝑘𝑚(𝑡)cos(2𝜋𝑓𝑚𝑡): 𝑘²𝑚²(𝑡)(1/√2)² = 𝑘²𝑃/2
9. Avg power of 𝐴𝑐cos(2𝜋𝑓𝑐𝑡): (𝐴𝑐/√2)² = 𝐴𝑐²/2
10. Avg power of 𝑘𝐴𝑐cos(2𝜋𝑓𝑐𝑡): 𝑘²𝐴𝑐²/(√2)² = 𝑘²𝐴𝑐²/2
11. Avg power of 𝑛(𝑡) (0 to 𝑊 Hz): [𝑛(𝑡)]² = 𝑁0𝑊
12. Avg power of 𝑘𝑛(𝑡): [𝑘𝑛(𝑡)]² = 𝑘²(𝑁0𝑊)
13. Avg power of 𝑘𝑚(𝑡)𝑛(𝑡): [𝑘𝑚(𝑡)𝑛(𝑡)]² = 𝑘²𝑚²(𝑡)(𝑁0𝑊) = 𝑘²𝑃(𝑁0𝑊)
14. Avg power of 𝑘𝑚(𝑡)cos(2𝜋𝑓𝑐𝑡): [𝑘𝑚(𝑡)cos(2𝜋𝑓𝑐𝑡)]² = 𝑘²𝑚²(𝑡)(1/√2)² = 𝑘²𝑃/2

3.6.1. Noise in AM receiver

Fig. 1.: Noise in 𝐀𝐌 receiver


Fig. 1 shows the noisy model of an AM receiver using an envelope detector.
• In AM signal, both side bands (USB and LSB) and the carrier wave are transmitted and is
expressed as:
𝑆(𝑡) = 𝐴𝑐 [1 + 𝐾𝑎 𝑚(𝑡)]cos(2𝜋𝑓𝑐 𝑡)
𝑆(𝑡) = 𝐴𝑐 cos(2𝜋𝑓𝑐 𝑡) + 𝐴𝑐 𝐾𝑎 𝑚(𝑡)cos(2𝜋𝑓𝑐 𝑡)
• The average power of the carrier component 𝐴𝑐cos(2𝜋𝑓𝑐𝑡) is 𝐴𝑐²/2, and the average power
of the message component 𝐴𝑐𝐾𝑎𝑚(𝑡)cos(2𝜋𝑓𝑐𝑡) is (𝐴𝑐²/2)𝐾𝑎²𝑃.
• The total average power of the full AM signal 𝑠(𝑡) is:
𝐴𝑐²/2 + (𝐴𝑐²/2)𝐾𝑎²𝑃 = (𝐴𝑐²/2)[1 + 𝐾𝑎²𝑃]
• The average power of noise in the message bandwidth is 𝑊𝑁0 .
(𝑆𝑁𝑅)𝐶 = Average signal power at receiver input / Average noise power at receiver input
(𝑺𝑵𝑹)𝑪 = (𝐴𝑐²/2)[1 + 𝐾𝑎²𝑃]/(𝑊𝑁0) = 𝑨𝒄²[𝟏 + 𝑲𝒂²𝑷]/(𝟐𝑾𝑵₀) → (𝟏)
• The filtered signal 𝑥(𝑡) applied to the envelope detector in the receiver model shown
in Fig. 1 is as follows; i.e., the input to the demodulator of the AM receiver is:
𝒙(𝒕) = 𝒔(𝒕) + 𝒏(𝒕)
𝑥(𝑡) = 𝐴𝑐[1 + 𝐾𝑎𝑚(𝑡)]cos(2𝜋𝑓𝑐𝑡) + 𝑛𝐼(𝑡)cos(2𝜋𝑓𝑐𝑡) − 𝑛𝑄(𝑡)sin(2𝜋𝑓𝑐𝑡)
𝑥(𝑡) = 𝐴𝑐cos(2𝜋𝑓𝑐𝑡) + 𝐴𝑐𝐾𝑎𝑚(𝑡)cos(2𝜋𝑓𝑐𝑡) + 𝑛𝐼(𝑡)cos(2𝜋𝑓𝑐𝑡) − 𝑛𝑄(𝑡)sin(2𝜋𝑓𝑐𝑡)

𝑥(𝑡) = [𝐴𝐶 + 𝐴𝐶 𝐾𝑎 𝑚(𝑡) + 𝑛𝐼 (𝑡)] cos(2𝜋𝑓𝑐 𝑡) − 𝑛𝑄 (𝑡) 𝑠𝑖𝑛(2𝜋𝑓𝑐 𝑡)


𝒙(𝒕) = [𝑨𝒄(𝟏 + 𝑲𝒂𝒎(𝒕)) + 𝒏𝑰(𝒕)]𝐜𝐨𝐬(𝟐𝝅𝒇𝒄𝒕) − 𝒏𝑸(𝒕)𝐬𝐢𝐧(𝟐𝝅𝒇𝒄𝒕) → (𝟐)

Fig. 2.: Phasor diagram of 𝐀𝐌 receiver

• The receiver output,
𝑦(𝑡) = envelope of 𝑥(𝑡)

𝒚(𝒕) = √[ In phase component ]𝟐 + [ Quadrature component ]𝟐

𝑦(𝑡) = √[𝐴𝐶 [1 + 𝐾𝑎 𝑚(𝑡)] + 𝑛𝐼 (𝑡)]2 + 𝑛𝑄2 (𝑡)


• If 𝐴𝑐[1 + 𝐾𝑎𝑚(𝑡)] is large compared with 𝑛𝐼(𝑡) and 𝑛𝑄(𝑡) most of the time, we can make
the following approximation:
𝒚(𝒕) ≃ 𝑨𝒄[𝟏 + 𝑲𝒂𝒎(𝒕)] + 𝒏𝑰(𝒕) → (𝟑)
• Expanding the signal term, we may express the output 𝑦(𝑡) as:
𝒚(𝒕) ≃ 𝑨𝒄 + 𝑨𝒄𝑲𝒂𝒎(𝒕) + 𝒏𝑰(𝒕) → (𝟒)
       (DC term)  (Message signal)  (Noise)
(𝑆𝑁𝑅)₀ = Information power at the output of the receiver / Noise power at the output of the receiver
• The message signal component at the output of the receiver is 𝐴𝑐 𝐾𝑎 𝑚(𝑡).
• ∴ The average power of the message signal is,
(𝐴𝑐²/2)𝐾𝑎²𝑚²(𝑡) = 𝐴𝑐²𝐾𝑎²𝑃/2
• The noise component at the output of the receiver is 𝑛𝐼 (𝑡).

∴ The average power of the noise at the receiver output is 𝑊𝑁0


∴ (𝑺𝑵𝑹)₀ = (𝑨𝒄²𝑲𝒂²𝑷/𝟐)/(𝑵₀𝑾) = 𝑨𝒄²𝑲𝒂²𝑷/(𝟐𝑵₀𝑾) → (𝟓)

Eq. (5) is valid only under the following two conditions:

1 The average noise power is small compared to the average carrier power at the
envelope detector input.
2 The amplitude sensitivity 𝐾𝑎 is adjusted for a percentage modulation less than or equal
to 100%.
∴ We know that, FOM = (𝑆𝑁𝑅)₀/(𝑆𝑁𝑅)𝑐
FOM = [𝐴𝑐²𝐾𝑎²𝑃/(2𝑊𝑁₀)] / [𝐴𝑐²(1 + 𝐾𝑎²𝑃)/(2𝑊𝑁₀)]
𝐅𝐎𝐌 = 𝑲𝒂²𝑷/(𝟏 + 𝑲𝒂²𝑷)

3.6.2. Single tone 𝐀𝐌


Derive the expression for the figure of merit of an AM receiver operating on single tone 𝑨𝑴.

• Let 𝑚(𝑡) be a single tone modulating signal defined by


𝑚(𝑡) = 𝐴𝑚 cos 2𝜋𝑓𝑚 𝑡
Where, 𝑓𝑚 is frequency of modulating signal.
𝐴𝑚 amplitude of modulating signal.
• The corresponding 𝐴𝑀 wave is:
𝑠(𝑡) = 𝐴𝑐 [1 + 𝜇cos(2𝜋𝑓𝑚 𝑡)]cos(2𝜋𝑓𝑐 𝑡)
Where 𝜇 = 𝐾𝑎 𝐴𝑚
• The average power of the modulating wave 𝑚(𝑡) is [assuming 𝑅 = 1Ω ]
𝑷 = 𝑨𝒎²/𝟐 → (𝟏)
We know that, FOM for AM receiver is:

𝐅𝐎𝐌 = 𝑲𝒂²𝑷/(𝟏 + 𝑲𝒂²𝑷) → (𝟐)
Substituting Eq. (1) in Eq. (2), we get
FOM = (𝐾𝑎²𝐴𝑚²/2)/(1 + 𝐾𝑎²𝐴𝑚²/2)
FOM = (𝜇²/2)/(1 + 𝜇²/2), where 𝜇 = 𝐾𝑎𝐴𝑚
𝐅𝐎𝐌 = 𝝁²/(𝟐 + 𝝁²)
When 𝝁 = 𝟏, which corresponds 𝟏𝟎𝟎% modulation, we get FOM equal to 𝟏/𝟑.
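The general and single-tone FOM expressions can be cross-checked numerically; the sample values of Ka and Am below are arbitrary choices of mine:

```python
def am_fom_general(ka: float, P: float) -> float:
    """AM figure of merit: Ka^2 * P / (1 + Ka^2 * P)."""
    return ka**2 * P / (1.0 + ka**2 * P)

def am_fom_single_tone(mu: float) -> float:
    """Single-tone case: mu^2 / (2 + mu^2)."""
    return mu**2 / (2.0 + mu**2)

ka, Am = 0.8, 1.25                       # arbitrary sample values
assert abs(am_fom_general(ka, Am**2 / 2) - am_fom_single_tone(ka * Am)) < 1e-12
print(am_fom_single_tone(1.0))           # 100% modulation -> 1/3
```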

Threshold effect
• When the carrier-to-noise ratio is small compared with unity, the noise dominates and
the performance of the envelope detector changes completely.
• To understand the analysis, the narrow band noise is represented in envelope 𝑟(𝑡) and
phase 𝜓(𝑡) as
𝑛(𝑡) = 𝑟(𝑡)cos[2𝜋𝑓𝑐 𝑡 + 𝜓(𝑡)]
• The corresponding phasor diagram for the detector input 𝑥(𝑡) = 𝑠(𝑡) + 𝑛(𝑡) is shown
as:

Fig. 1: Phase diagram for AM wave plus narrow band noise for the case of low carrier to
noise ratio.
• If 𝐴𝑐 is small compared with the noise envelope 𝑟(𝑡), then we may neglect the quadrature
component of the signal with respect to the noise.
• The envelope detector output is approximately:
𝑦(𝑡) ≈ 𝑟(𝑡) + 𝐴𝑐[1 + 𝐾𝑎𝑚(𝑡)]cos[𝜓(𝑡)]
𝑦(𝑡) = 𝑟(𝑡) + 𝐴𝑐cos[𝜓(𝑡)] + 𝐴𝑐𝐾𝑎𝑚(𝑡)cos[𝜓(𝑡)]
• The last term of 𝑦(𝑡) contains the message signal 𝑚(𝑡) multiplied by noise in the form of
cos[𝜓(𝑡)]
• Because of the low carrier-to-noise power, there is a complete loss of information in the
detector output; i.e., the detector output does not contain the message signal 𝑚(𝑡) on its own at all.
• The loss of message in an envelope detector that operates at a low carrier to noise ratio
is referred as threshold effect.


3.6.3 DSB-SC receiver:

Fig. 1: Model of DSB-SC receiver using coherent detector

The band pass filtered signal 𝑥(𝑡) is applied to the product modulator of coherent detector to
which a locally generated sinusoidal wave cos 2𝜋𝑓𝑐 𝑡 is applied. The output of the multiplier is then
passed through an LPF.

The DSB-SC component of the filtered signal 𝑥(𝑡) is given by:


𝑺(𝒕) = 𝑪 𝑨𝒄 𝐜𝐨𝐬(𝟐𝝅𝒇𝒄 𝒕) 𝒎(𝒕) → (𝟏)
Where
𝐴𝑐 cos(2𝜋𝑓𝑐 𝑡) is the sinusoidal carrier wave
𝑚(𝑡) is message signal
𝐶 is system-dependent scaling factor.

The PSD of 𝑚(𝑡), denoted 𝑆𝑀(𝑓), is limited to a message bandwidth 𝑊.


The average power 𝑃 of the message signal is
𝑃 = ∫_{−𝑊}^{𝑊} 𝑆𝑀(𝑓) 𝑑𝑓

The average power of the DSB-SC modulated signal component 𝑆(𝑡) is 𝑪²𝑨𝒄²𝑷/𝟐.
The average noise power in the message bandwidth 𝑊 is equal to 𝑁₀𝑊.
The channel signal to noise ratio is given by:

(𝑺𝑵𝑹)𝒄 = Average signal power at receiver input / Average noise power at receiver input
(𝑺𝑵𝑹)𝒄 = (𝑪²𝑨𝒄²𝑷/𝟐)/(𝑵₀𝑾) = 𝑪²𝑨𝒄²𝑷/(𝟐𝑵₀𝑾) → (2)

where the constant 𝐶 2 in the numerator ensures that this ratio is dimensionless.

The total signal at the input of coherent detector is given by:

𝒙(𝒕) = 𝑪 𝑨𝒄 𝐜𝐨𝐬(𝟐𝝅𝒇𝒄 𝒕)𝒎(𝒕) + 𝒏𝑰 (𝒕) 𝐜𝐨𝐬(𝟐𝝅𝒇𝒄 𝒕) − 𝒏𝑸 (𝒕) 𝐬𝐢𝐧(𝟐𝝅𝒇𝒄 𝒕) → (𝟑)

where 𝑛𝐼 (𝑡) & 𝑛𝑄 (𝑡) are in-phase & quadrature phase components of 𝑛(𝑡).
The output of the product modulator is given by:
𝑽(𝒕) = 𝒙(𝒕) 𝐜𝐨𝐬(𝟐𝝅𝒇𝒄 𝒕) → (𝟒)
Substitute Eq. (3) in Equation (4), we get,


𝑉(𝑡) = [𝐶𝐴𝑐 cos(2𝜋𝑓𝑐 𝑡)𝑚(𝑡) + 𝑛I (𝑡) cos(2𝜋𝑓𝑐 𝑡) − 𝑛𝑄 (𝑡) sin(2𝜋𝑓𝑐 𝑡)] cos(2𝜋𝑓𝑐 𝑡)
𝑉(𝑡) = 𝐶𝐴𝐶 cos 2 (2𝜋𝑓𝑐 𝑡)𝑚(𝑡) + 𝑛𝐼 (𝑡) cos 2 (2𝜋𝑓𝑐 𝑡) − 𝑛𝑄 (𝑡) sin(2𝜋𝑓𝑐 𝑡) cos(2𝜋𝑓𝑐 𝑡)
We know that cos²𝜃 = 1/2 + (cos 2𝜃)/2, i.e. cos²(2𝜋𝑓𝑐𝑡) = 1/2 + cos(4𝜋𝑓𝑐𝑡)/2
and sin 𝐴 · cos 𝐴 = (1/2)sin 2𝐴, i.e. sin(2𝜋𝑓𝑐𝑡)cos(2𝜋𝑓𝑐𝑡) = (1/2)sin(4𝜋𝑓𝑐𝑡)
𝑉(𝑡) = 𝐶𝐴𝑐𝑚(𝑡)[1/2 + cos(4𝜋𝑓𝑐𝑡)/2] + 𝑛𝐼(𝑡)[1/2 + cos(4𝜋𝑓𝑐𝑡)/2] − (1/2)𝑛𝑄(𝑡)sin(4𝜋𝑓𝑐𝑡)
𝑽(𝒕) = (𝟏/𝟐)𝑪𝑨𝒄𝒎(𝒕) + (𝟏/𝟐)𝒏𝑰(𝒕) + (𝟏/𝟐)𝐜𝐨𝐬(𝟒𝝅𝒇𝒄𝒕)[𝑪𝑨𝒄𝒎(𝒕) + 𝒏𝑰(𝒕)] − (𝟏/𝟐)𝒏𝑸(𝒕)𝐬𝐢𝐧(𝟒𝝅𝒇𝒄𝒕) → (𝟓)
𝟐 𝟐 𝟐 𝟐

Eq. (5) is passed through the LPF, which removes the high-frequency components of 𝑉(𝑡). Hence the
receiver output is given by:
𝒚(𝒕) = (𝟏/𝟐)𝑪𝑨𝒄𝒎(𝒕) + (𝟏/𝟐)𝒏𝑰(𝒕) → (𝟔)
Eq. (6) indicates the following:
i) The message signal 𝑚(𝑡) & in-phase noise component 𝑛𝐼 (𝑡) of filtered noise 𝑛(𝑡) appear
additively at receiver output.

ii) The quadrature component 𝑛𝑄(𝑡) of the noise 𝑛(𝑡) is completely rejected by the coherent
detector.
The (𝑺𝑵𝑹)₀ = Information power at the output of receiver / Noise power at the output of receiver
The message signal component at the receiver output is (1/2)𝐶𝐴𝑐𝑚(𝑡).

∴ The average power of this component is
(1/2)²𝐶²𝐴𝑐²𝑚²(𝑡) = (1/4)𝐶²𝐴𝑐²𝑃
2 2 4
The noise component at the receiver output is (1/2)𝑛𝐼(𝑡).

∴ The average power of the noise at the receiver output is
(1/2)² × 2𝑁₀𝑊 = (1/2)𝑁₀𝑊
[the in-phase noise 𝑛𝐼(𝑡) has average power 2𝑁₀𝑊, since the filtered noise occupies the transmission bandwidth 𝐵𝑇 = 2𝑊]
(𝑆𝑁𝑅)₀ = (𝐶²𝐴𝑐²𝑃/4)/((1/2)𝑁₀𝑊)
(𝑆𝑁𝑅)₀ = 𝐶²𝐴𝑐²𝑃 × 2/(4𝑁₀𝑊)
(𝑺𝑵𝑹)₀ = 𝑪²𝑨𝒄²𝑷/(𝟐𝑵₀𝑾) → (𝟕)
Figure of Merit FOM = (𝑆𝑁𝑅)₀/(𝑆𝑁𝑅)𝑐 = [𝐶²𝐴𝑐²𝑃/(2𝑁₀𝑊)] / [𝐶²𝐴𝑐²𝑃/(2𝑁₀𝑊)] = 1

𝑭𝑶𝑴 = 1
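The two trigonometric facts driving this result — cos² averaging to 1/2 (which produces the 1/2 factors on the message and in-phase noise) and sin·cos averaging to 0 (which rejects nQ(t)) — can be verified numerically over one carrier period; the sampling grid below is an arbitrary choice:

```python
import math

N = 10_000                       # samples over one period of a unit-frequency carrier
ts = [i / N for i in range(N)]

avg_cos2 = sum(math.cos(2 * math.pi * t) ** 2 for t in ts) / N
avg_sincos = sum(math.sin(2 * math.pi * t) * math.cos(2 * math.pi * t) for t in ts) / N

print(avg_cos2)    # ≈ 0.5: source of the 1/2 factors in Eq. (6)
print(avg_sincos)  # ≈ 0.0: the quadrature noise term averages away
```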

3.6.4. Noise in FM receiver

Fig. 1.: Noise Model of an FM receiver


The noise 𝑤(𝑡) is modeled as white Gaussian noise of zero mean and PSD 𝑁₀/2. The FM signal
𝑠(𝑡) has a carrier frequency 𝑓𝑐 and transmission bandwidth 𝐵𝑇.
In an FM system, the message information is transmitted by variations of instantaneous
frequency of a sinusoidal carrier wave and its amplitude is maintained constant.
∴ Any variations of the carrier amplitude at the receiver input must result from noise or
interference. The amplitude limiter, following the BPF in the receiver is used to remove amplitude
variations by clipping the modulated wave at the filter output.

The discriminator in the model consists of 2 components:


1. A slope network or differentiator with a purely imaginary transfer function.
2. An envelope detector.
The filtered noise 𝑛(𝑡) at the BPF output in the fig. is defined in terms of its in-phase and
quadrature components by:
𝒏(𝒕) = 𝒏𝑰 (𝒕)𝐜𝐨𝐬(𝟐𝝅𝒇𝒄 𝒕) − 𝒏𝑸 (𝒕)𝐬𝐢𝐧(𝟐𝝅𝒇𝒄 𝒕)
• We may express 𝑛(𝑡) in terms of its envelope and phase as:
𝒏(𝒕) = 𝒓(𝒕)𝐜𝐨𝐬[𝟐𝝅𝒇𝒄𝒕 + 𝝍(𝒕)] → (𝟏)
Where the envelope is,
𝑟(𝑡) = [𝑛𝐼²(𝑡) + 𝑛𝑄²(𝑡)]^(1/2)
and the phase is:
𝜓(𝑡) = tan⁻¹[𝑛𝑄(𝑡)/𝑛𝐼(𝑡)]
• The incoming FM signal 𝑠(𝑡) is defined by,
𝑆(𝑡) = 𝐴𝑐cos[2𝜋𝑓𝑐𝑡 + 2𝜋𝐾𝑓∫₀ᵗ 𝑚(𝑡)𝑑𝑡]
Let 𝜙(𝑡) = 2𝜋𝐾𝑓∫₀ᵗ 𝑚(𝑡)𝑑𝑡
Then, 𝑺(𝒕) = 𝑨𝒄𝐜𝐨𝐬[𝟐𝝅𝒇𝒄𝒕 + 𝝓(𝒕)] → (𝟐)
• The noisy signal at the BPF output is therefore,
𝒙(𝒕) = 𝒔(𝒕) + 𝒏(𝒕) → (𝟑)
Substitute Eq. (1) and Eq. (2) in Eq. (3)
𝑥(𝑡) = 𝐴𝑐 cos[2𝜋𝑓𝑐 𝑡 + 𝜙(𝑡)] + 𝑟(𝑡)cos[2𝜋𝑓𝑐 𝑡 + 𝜓(𝑡)]
The total signal 𝑥(𝑡) can be represented by a phasor diagram as shown in the fig below.

Fig. 2.: Phasor Diagram

The phase 𝜃(𝑡) of the resultant 𝑥(𝑡) is given by,
𝜽(𝒕) − 𝝓(𝒕) = 𝐭𝐚𝐧⁻¹{ 𝒓(𝒕)𝐬𝐢𝐧[𝝍(𝒕) − 𝝓(𝒕)] / (𝑨𝒄 + 𝒓(𝒕)𝐜𝐨𝐬[𝝍(𝒕) − 𝝓(𝒕)]) } → (𝟒)

• Let us assume that carrier SNR at the discriminator input to be much larger than unity.
Then Eq. (4) reduces to:
𝜃(𝑡) = 𝜙(𝑡) + (𝑟(𝑡)/𝐴𝑐)sin[𝜓(𝑡) − 𝜙(𝑡)]
We know that, 𝜙(𝑡) = 2𝜋𝐾𝑓∫₀ᵗ 𝑚(𝑡)𝑑𝑡
𝜽(𝒕) = 𝟐𝝅𝑲𝒇∫₀ᵗ 𝒎(𝒕)𝒅𝒕 + (𝒓(𝒕)/𝑨𝒄)𝐬𝐢𝐧[𝝍(𝒕) − 𝝓(𝒕)] → (𝟓)

∴ The discriminator output is,


𝑽(𝒕) = (𝟏/𝟐𝝅) 𝒅𝜽(𝒕)/𝒅𝒕 → (𝟔)
Substituting Eq. (5) in Eq. (6)
𝑉(𝑡) = (1/2𝜋)(𝑑/𝑑𝑡){2𝜋𝐾𝑓∫₀ᵗ 𝑚(𝑡)𝑑𝑡 + (𝑟(𝑡)/𝐴𝑐)sin[𝜓(𝑡) − 𝜙(𝑡)]}
= (1/2𝜋)(𝑑/𝑑𝑡) 2𝜋𝐾𝑓∫₀ᵗ 𝑚(𝑡)𝑑𝑡 + (1/2𝜋)(𝑑/𝑑𝑡){(𝑟(𝑡)/𝐴𝑐)sin[𝜓(𝑡) − 𝜙(𝑡)]}
𝑉(𝑡) = 𝐾𝑓𝑚(𝑡) + (1/2𝜋𝐴𝑐)(𝑑/𝑑𝑡){𝑟(𝑡)sin[𝜓(𝑡) − 𝜙(𝑡)]}
𝑽(𝒕) = 𝑲𝒇𝒎(𝒕) + 𝒏𝒅(𝒕) → (𝟕)

Where,
𝑛𝑑(𝑡) = (1/2𝜋𝐴𝑐)(𝑑/𝑑𝑡){𝑟(𝑡)sin[𝜓(𝑡) − 𝜙(𝑡)]}

• The discriminator output 𝑉(𝑡) Eq. (7) consists of the original message 𝑚(𝑡) with a
multiplying factor 𝐾𝑓 plus the additional noise component 𝑛𝑑 (𝑡).
• The presence of 𝜙(𝑡) in 𝑛𝑑(𝑡) produces components in the power spectrum of the noise
𝑛𝑑(𝑡) at frequencies lying outside the message band. Therefore 𝜙(𝑡) can be assumed to
be zero in 𝑛𝑑(𝑡).
∴ 𝑛𝑑(𝑡) = (1/2𝜋𝐴𝑐)(𝑑/𝑑𝑡){𝑟(𝑡)sin[𝜓(𝑡) − 0]}
𝒏𝒅(𝒕) = (𝟏/𝟐𝝅𝑨𝒄)(𝒅/𝒅𝒕){𝒓(𝒕)𝐬𝐢𝐧 𝝍(𝒕)} → (𝟖)
In Eq. (8), the term 𝑟(𝑡)sin 𝜓(𝑡) is the quadrature component of the narrow band noise
𝑛(𝑡) i.e;
𝑛𝑄 (𝑡) = 𝑟(𝑡)sin 𝜓(𝑡)
Then,
𝒏𝒅(𝒕) = (𝟏/𝟐𝝅𝑨𝒄)(𝒅/𝒅𝒕) 𝒏𝑸(𝒕) → (𝟗)
• Output signal-to-noise ratio:
(𝑆𝑁𝑅)₀ = Average output signal power / Average output noise power
From Eq. (7), the average output signal power is 𝐾𝑓²𝑚²(𝑡), i.e., 𝐾𝑓²𝑃.
• The PSD of the output noise 𝑛₀(𝑡) appearing at the receiver output is given by:
𝑆𝑁0(𝑓) = 𝑁₀𝑓²/𝐴𝑐² for |𝑓| ⩽ 𝑊, and 0 otherwise

Average power of output noise is,
= ∫_{−𝑊}^{𝑊} 𝑆𝑁0(𝑓) 𝑑𝑓
= ∫_{−𝑊}^{𝑊} (𝑁₀𝑓²/𝐴𝑐²) 𝑑𝑓
= (𝑁₀/𝐴𝑐²) ∫_{−𝑊}^{𝑊} 𝑓² 𝑑𝑓
= (𝑁₀/𝐴𝑐²) [𝑓³/3] from −𝑊 to 𝑊
= (𝑁₀/3𝐴𝑐²)[𝑊³ − (−𝑊)³]
= (𝑁₀/3𝐴𝑐²)[𝑊³ + 𝑊³]
Average power of output noise = (𝑁₀/3𝐴𝑐²) · 2𝑊³ = 2𝑁₀𝑊³/(3𝐴𝑐²)
(𝑆𝑁𝑅)₀ = 𝐾𝑓²𝑃 / (2𝑁₀𝑊³/(3𝐴𝑐²))
(𝑆𝑁𝑅)₀ = 3𝐴𝑐²𝐾𝑓²𝑃/(2𝑁₀𝑊³)
(𝑺𝑵𝑹)𝑪 = Average power in modulated signal / Average noise power in the message bandwidth

Average power in modulated signal = (𝐴𝑐/√2)² = 𝐴𝑐²/2
Average noise power in the message bandwidth = 𝑁₀𝑊
(𝑆𝑁𝑅)𝑐 = (𝐴𝑐²/2)/(𝑁₀𝑊)
(𝑆𝑁𝑅)𝑐 = 𝐴𝑐²/(2𝑁₀𝑊)
FOM = (𝑆𝑁𝑅)₀/(𝑆𝑁𝑅)𝑐
= [3𝐴𝑐²𝐾𝑓²𝑃/(2𝑁₀𝑊³)] / [𝐴𝑐²/(2𝑁₀𝑊)]
= [3𝐴𝑐²𝐾𝑓²𝑃/(2𝑁₀𝑊³)] × [2𝑁₀𝑊/𝐴𝑐²]
FOM = 3𝐾𝑓²𝑃/𝑊²
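A small numeric sketch of this result; the sample values below are the single-tone numbers (Kf = 10 kHz/V, Am = 5 V, fm = 20 kHz) that also appear in Problem 11 later in this unit:

```python
def fm_fom(kf: float, P: float, W: float) -> float:
    """FM figure of merit: 3 * Kf^2 * P / W^2."""
    return 3.0 * kf**2 * P / W**2

# Single tone: P = Am^2 / 2, so FOM reduces to (3/2) * (Kf*Am/W)^2 = (3/2) * beta^2
Am, W, Kf = 5.0, 20e3, 10e3
print(fm_fom(Kf, Am**2 / 2, W))  # 9.375 = 1.5 * 2.5**2
```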

3.6.4. Single tone FM:
Derive the expression for the Figure of Merit of an FM receiver operating on single tone FM.

Consider a sinusoidal wave of frequency 𝑓𝑚 or 𝑊 as the modulating signal. The 𝐹𝑀 modulated


wave is given by:
𝑆(𝑡) = 𝐴𝑐cos[2𝜋𝑓𝑐𝑡 + 2𝜋𝐾𝑓∫₀ᵗ 𝑚(𝑡)𝑑𝑡]
= 𝐴𝑐cos[2𝜋𝑓𝑐𝑡 + 2𝜋𝐾𝑓∫₀ᵗ 𝐴𝑚cos(2𝜋𝑓𝑚𝑡)𝑑𝑡]
= 𝐴𝑐cos[2𝜋𝑓𝑐𝑡 + 2𝜋𝐾𝑓𝐴𝑚∫₀ᵗ cos(2𝜋𝑓𝑚𝑡)𝑑𝑡]
= 𝐴𝑐cos[2𝜋𝑓𝑐𝑡 + 2𝜋𝐾𝑓𝐴𝑚 · sin(2𝜋𝑓𝑚𝑡)/(2𝜋𝑓𝑚)]
= 𝐴𝑐cos[2𝜋𝑓𝑐𝑡 + (𝐾𝑓𝐴𝑚/𝑓𝑚)sin(2𝜋𝑓𝑚𝑡)]
𝑺(𝒕) = 𝑨𝒄𝐜𝐨𝐬[𝟐𝝅𝒇𝒄𝒕 + (𝚫𝒇/𝒇𝒎)𝐬𝐢𝐧(𝟐𝝅𝒇𝒎𝒕)] → (𝟏)
Then,
𝟐𝝅𝑲𝒇∫₀ᵗ 𝒎(𝒕)𝒅𝒕 = (𝚫𝒇/𝒇𝒎)𝐬𝐢𝐧(𝟐𝝅𝒇𝒎𝒕) → (𝟐)
Differentiating both sides of Eq. (2) with respect to 𝑡, we get
2𝜋𝐾𝑓𝑚(𝑡) = (Δ𝑓/𝑓𝑚)cos(2𝜋𝑓𝑚𝑡) · 2𝜋𝑓𝑚
𝒎(𝒕) = (𝚫𝒇/𝑲𝒇)𝐜𝐨𝐬(𝟐𝝅𝒇𝒎𝒕) → (𝟑)
The average power of the message signal 𝑚(𝑡), developed across a 1 Ω load is:
𝑃 = Δ𝑓²/(2𝐾𝑓²)
𝑲𝒇²𝑷 = 𝚫𝒇²/𝟐 → (𝟒)
We know that
(𝑺𝑵𝑹)₀ = 𝟑𝑨𝒄²𝑲𝒇²𝑷/(𝟐𝑵₀𝑾³) → (𝟓)
Substituting Eq. (4) in Eq. (5)
(𝑆𝑁𝑅)₀ = 3𝐴𝑐²(Δ𝑓²/2)/(2𝑁₀𝑊³)
(𝑺𝑵𝑹)₀ = 𝟑𝑨𝒄²𝚫𝒇²/(𝟒𝑵₀𝑾³) → (𝟔)
We know that, 𝛽 = Δ𝑓/𝑊
𝚫𝒇 = 𝜷𝑾 → (𝟕)
Substituting Eq. (7) in Eq. (6), we get,
(𝑆𝑁𝑅)₀ = 3𝐴𝑐²(𝛽𝑊)²/(4𝑁₀𝑊³)
= 3𝐴𝑐²𝛽²𝑊²/(4𝑁₀𝑊³)
(𝑺𝑵𝑹)₀ = 𝟑𝑨𝒄²𝜷²/(𝟒𝑵₀𝑾)
We know that
(𝑆𝑁𝑅)𝑐 = 𝐴𝑐²/(2𝑁₀𝑊)
∴ FOM = (𝑆𝑁𝑅)₀/(𝑆𝑁𝑅)𝐶
= [3𝐴𝑐²𝛽²/(4𝑁₀𝑊)] / [𝐴𝑐²/(2𝑁₀𝑊)]
= [3𝐴𝑐²𝛽²/(4𝑁₀𝑊)] × [2𝑁₀𝑊/𝐴𝑐²]
𝐅𝐎𝐌 = (𝟑/𝟐)𝜷²
• FM has improved SNR over AM if:
(3/2)𝛽² > 1/3
𝛽² > (2/3) × (1/3)
𝛽² > 2/9
𝛽 > √(2/9)
i.e., 𝜷 > √𝟐/𝟑 ≈ 𝟎.𝟒𝟕
𝜷 ≈ 𝟎.𝟓
NBFM has only two side bands and is similar to an AM system. ∴ 𝛽 ≈ 0.5 defines the transition
from NBFM to WBFM.
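The crossover against AM's best case (FOM = 1/3 at 100% modulation) can be checked directly:

```python
import math

beta_threshold = math.sqrt(2) / 3       # exact crossover point
print(beta_threshold)                   # ≈ 0.471, commonly rounded to 0.5

assert 1.5 * 0.48**2 > 1 / 3            # just above threshold: FM beats AM
assert 1.5 * 0.46**2 < 1 / 3            # just below: AM-like performance
```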

Capture effect
• Consider an FM signal having carrier frequency 𝑓𝑐 . If there is another FM signal whose
spectral content is centered around 𝑓𝑐 , then the second signal is known as an interference
signal.
• Interference suppression in an FM receiver works well only when the interference is
weaker than the desired FM input.
• When the interference is stronger than FM input, the receiver locks onto the stronger
signal and thereby suppresses the desired FM input.
When the strength of the desired signal and the interference signal are nearly equal, the
receiver fluctuates back and forth between them (i.e. receiver locks interference signal
for some time and desired signal for the same time).
This phenomenon is known as the capture effect.

3.6.5. Pre-emphasis and De-emphasis in FM

Fig. 1(a): PSD of noise at FM receiver output


Fig. 1 (b): PSD of a typical message signal

• Fig. 1(a) shows the PSD of noise at the output of an FM receiver. Fig. 1(b) shows the PSD of a
typical message source, such as audio or video.
• The PSD of message signal falls off at higher frequencies. PSD of the output noise increases
rapidly with frequency. Thus, around 𝑓 = ±𝑊, the relative spectral density of the
message signal is quite low, whereas that of the output noise is quite high.
• It has been proved that in FM, the noise has a greater effect on the higher modulating
frequencies. This effect can be reduced by increasing the value of the modulation index (𝛽)
for higher modulating frequencies 𝑓𝑚. This can be done by increasing the frequency deviation
Δ𝑓, and Δ𝑓 can be increased by increasing the amplitude of the modulating signal at the higher
modulating frequencies.

Fig. 2. Pre-emphasis and de-emphasis in an FM system

Fig. 3. Pre-emphasis filter

Fig. 4. De-emphasis filter

• Thus, we 'boost' the amplitude of the higher modulating frequencies to improve the noise
immunity at higher modulating frequencies. This artificial boosting of the higher modulating
frequencies is called pre-emphasis. Fig. 3 shows the pre-emphasis circuit and its frequency
response characteristics.
• The modulating 𝐴𝐹 signal is passed through a high pass RC filter, before applying it to the
FM modulator.
• As ' 𝑓𝑚 ' increases, reactance of ' 𝐶 ' decreases and modulating voltage applied to FM
modulator goes on increasing.
• The artificial boosting given to the higher modulating frequencies in the process of pre-
emphasis is nullified or compensated at the receiver by a process called De-emphasis.
• The artificially boosted high frequency signals are bought to their original amplitude
using the de-emphasis circuit.
• The demodulated FM is applied to the de-emphasis circuit; with increase in 𝑓𝑚, the
reactance of 𝐶 goes on decreasing and the output of the de-emphasis circuit will reduce.
• To recover the original message signal without distortion it is required that transfer
function of pre-emphasis and de-emphasis filters must have an inverse relationship.
𝐻𝑑𝑒(𝑓) = 1/𝐻𝑝𝑒(𝑓), −𝑊 ⩽ 𝑓 ⩽ 𝑊
• The PSD of the noise 𝑛𝑑(𝑡) at the discriminator output is,
𝑆𝑁𝑑(𝑓) = 𝑁₀𝑓²/𝐴𝑐² for |𝑓| ⩽ 𝐵𝑇/2, and 0 otherwise
• Average output noise power with de-emphasis is,
= ∫_{−𝑊}^{𝑊} 𝑆𝑁𝑑(𝑓)|𝐻𝑑𝑒(𝑓)|² 𝑑𝑓
= ∫_{−𝑊}^{𝑊} (𝑁₀𝑓²/𝐴𝑐²)|𝐻𝑑𝑒(𝑓)|² 𝑑𝑓

• Average output noise power with de-emphasis is
= (𝑵₀/𝑨𝒄²) ∫_{−𝑾}^{𝑾} 𝒇²|𝑯𝒅𝒆(𝒇)|² 𝒅𝒇 → (𝟏)
The average output noise power without pre-emphasis and de-emphasis is,
= 𝟐𝑵₀𝑾³/(𝟑𝑨𝒄²) → (𝟐)

Improvement factor ‘I’
𝑰 = Average output noise power without pre-emphasis and de-emphasis / Average output noise power with pre-emphasis and de-emphasis → (𝟑)
Substitute Eq. (1) and Eq. (2) in Eq. (3)
𝐼 = [2𝑁₀𝑊³/(3𝐴𝑐²)] / [(𝑁₀/𝐴𝑐²)∫_{−𝑊}^{𝑊} 𝑓²|𝐻𝑑𝑒(𝑓)|² 𝑑𝑓]
𝑰 = 𝟐𝑾³ / (𝟑∫_{−𝑾}^{𝑾} 𝒇²|𝑯𝒅𝒆(𝒇)|² 𝒅𝒇) → (𝟒)
The transfer function of pre-emphasis is,
𝐻𝑝𝑒(𝑓) = 1 + 𝑗𝑓/𝑓₀
The transfer function of de-emphasis is,
𝑯𝒅𝒆(𝒇) = 1/𝐻𝑝𝑒(𝑓) = 𝟏/(𝟏 + 𝒋𝒇/𝒇₀) → (𝟓)

Substitute Eq. (5) in Eq. (4), we get

𝐼 = 2𝑊³ / (3∫_{−𝑊}^{𝑊} [𝑓²/(1 + (𝑓/𝑓₀)²)] 𝑑𝑓)
𝑰 = (𝑾/𝒇₀)³ / (𝟑[(𝑾/𝒇₀) − 𝐭𝐚𝐧⁻¹(𝑾/𝒇₀)])
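The improvement factor derived above is easy to evaluate; the commercial-FM numbers used below (W = 15 kHz, f₀ = 2.1 kHz) are the ones worked in Problem 12 later in this unit:

```python
import math

def deemphasis_improvement(W: float, f0: float) -> float:
    """I = (W/f0)^3 / (3 * (W/f0 - atan(W/f0)))."""
    x = W / f0
    return x**3 / (3.0 * (x - math.atan(x)))

print(deemphasis_improvement(15e3, 2.1e3))  # ≈ 21.27 for commercial FM broadcasting
```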


FORMULAE

AM Receiver
1. Carrier power: 𝐴𝑐²/2
2. Power in each side band: 𝜇²𝐴𝑐²/8
i.e. 𝜇²𝐴𝑐²/8 = (𝜇²/4)(𝐴𝑐²/2) = (𝜇²/4) × carrier power (𝑃𝐶)

3 𝑲𝟐𝒂 𝑨𝟐𝒎 : (𝐾𝑎 𝐴𝑚 )2 = 𝜇2

4. (𝑺𝑵𝑹)₀: (𝑆𝑁𝑅)₀ = 𝐴𝑐²𝐾𝑎²𝑃/(2𝑁₀𝑊), where 𝑃 = 𝐴𝑚²/2
or
(𝑆𝑁𝑅)₀ = 𝐴𝑐²𝐾𝑎²(𝐴𝑚²/2)/(2𝑁₀𝑊) = 𝐴𝑐²𝜇²/(4𝑁₀𝑊), where 𝐾𝑎𝐴𝑚 = 𝜇
or
(𝑆𝑁𝑅)₀ = (𝐴𝑐²/2) × (1/(𝑁₀𝑊)) × (𝜇²/2)

5. 𝐅𝐎𝐌: 𝜇²/(𝜇² + 2)

6. (𝐂𝐍𝐑) = 𝝆:
𝜌 = 𝐴𝑐²/(2𝐵𝑇𝑁₀), where 𝐵𝑇 = 2𝑊
𝜌 = 𝐴𝑐²/(2 × 2𝑊𝑁₀) = 𝐴𝑐²/(4𝑁₀𝑊)

FM Formulae
1. Modulation index 𝜷: 𝛽 = Δ𝑓/𝑓𝑚, where Δ𝑓 = 𝐾𝑓𝐴𝑚
2. 𝐅𝐎𝐌: (3/2)𝛽²
3. (𝑺𝑵𝑹)𝒄: 𝐴𝑐²/(2𝑁₀𝑊)
4. (𝑺𝑵𝑹)₀: 3𝐴𝑐²𝐾𝑓²𝑃/(2𝑁₀𝑊³)
We know that, (Δ𝑓)² = 2𝐾𝑓²𝑃, i.e. 𝑲𝒇²𝑷 = 𝚫𝒇²/𝟐
[𝑆𝑁𝑅]₀ = 3𝐴𝑐²(Δ𝑓²/2)/(2𝑁₀𝑊³)
[𝑆𝑁𝑅]₀ = 3𝐴𝑐²Δ𝑓²/(4𝑁₀𝑊³)

PRE-EMPHASIS & DE-EMPHASIS
1 (𝑊/𝑓0 )3
𝑰=
𝑊 𝑊
3 [( ) − tan−1 ( )]
𝑓0 𝑓0

2. 𝑰 = (𝑊/𝑓₀)² tan⁻¹(𝑊/𝑓₀) / (3[(𝑊/𝑓₀) − tan⁻¹(𝑊/𝑓₀)])

Problem 3.
Find the figure of merit when the modulation depth is
i) 100%
ii) 50%
iii) 30%

Solution:

i. 𝜇 = 100%, i.e., 𝜇 = 1
FOM = 𝜇²/(2 + 𝜇²) = 1/(2 + 1) = 1/3
FOM = 0.333
ii. 𝜇 = 50%, i.e., 𝜇 = 0.5
FOM = 𝜇²/(2 + 𝜇²) = (0.5)²/(2 + (0.5)²) = 0.25/2.25
𝐅𝐎𝐌 = 𝟎.𝟏𝟏𝟏
iii. 𝜇 = 30%, i.e., 𝜇 = 0.3
FOM = 𝜇²/(2 + 𝜇²) = (0.3)²/(2 + (0.3)²) = 0.09/2.09
𝐅𝐎𝐌 = 𝟎.𝟎𝟒𝟑
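The three cases above can be reproduced in a short loop:

```python
def am_fom(mu: float) -> float:
    """AM figure of merit for modulation depth mu: mu^2 / (2 + mu^2)."""
    return mu**2 / (2.0 + mu**2)

for mu in (1.0, 0.5, 0.3):
    print(f"mu = {mu}: FOM = {am_fom(mu):.3f}")  # 0.333, 0.111, 0.043
```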

Problem 4. An AM receiver operating with a sinusoidal wave and 80% modulation has an output
signal-to-noise ratio of 30 dB. Calculate the corresponding carrier-to-noise ratio.

Given: 𝜇 = 80% = 0.8, (𝑆𝑁𝑅)0 𝑑𝐵 = 30𝑑𝐵


Solution:

10log₁₀[𝑆𝑁𝑅]₀ = 30
[𝑆𝑁𝑅]₀ = log₁₀⁻¹[30/10]
[𝑺𝑵𝑹]₀ = 𝟏𝟎𝟎𝟎
We know that,
[𝑆𝑁𝑅]₀ = 𝜇²𝐴𝑐²/(4𝑁₀𝑊)
[𝑺𝑵𝑹]₀ = 𝟏𝟎𝟎𝟎
We also know that,
[𝑪𝑵𝑹] = 𝑨𝒄²/(𝟒𝑵₀𝑾)
From the two relations above, [𝑆𝑁𝑅]₀ can be rewritten as:
𝜇² · 𝐴𝑐²/(4𝑁₀𝑊) = 1000
𝜇²[𝐶𝑁𝑅] = 1000
[𝐶𝑁𝑅] = 1000/𝜇² = 1000/(0.8)²
[𝑪𝑵𝑹] = 𝟏𝟓𝟔𝟐.𝟓
[𝐶𝑁𝑅]dB = 10log₁₀[𝐶𝑁𝑅] = 10log₁₀[1562.5]
[𝑪𝑵𝑹]𝐝𝐁 = 𝟑𝟏.𝟗𝟑 ≈ 𝟑𝟐 𝐝𝐁

Problem 5.
A carrier wave reaching an envelope detector in an AM receiver has an RMS value equal to 1 V in
the absence of modulation. The noise at the input of the envelope detector has a PSD equal to
10⁻³ W/Hz. If the carrier is modulated to a depth of 100% and the message bandwidth = 3.2 kHz,
find (𝑆𝑁𝑅)₀.

Given: 𝐴𝑐/√2 = 1 V and 𝐴𝑐²/2 = 1.
Solution:

𝑊 = 3.2 kHz and 𝜇 = 1
𝑁₀/2 = 10⁻³ W/Hz
𝑁₀ = 2 × 10⁻³ W/Hz
We know that,
[𝑆𝑁𝑅]₀ = 𝜇²𝐴𝑐²/(4𝑁₀𝑊)
= (𝐴𝑐²/2) × (1/(𝑁₀𝑊)) × (𝜇²/2)
= 1 × 1/(2 × 10⁻³ × 3.2 × 10³) × (1/2)
[𝑺𝑵𝑹]₀ = 𝟎.𝟎𝟕𝟖𝟏𝟐

Problem 6.
The PSD of noise at the front end of the receiver is 0.5 × 10−3 W/Hz. The modulating wave 𝑚(𝑡)
is sinusoidal, with a carrier power of 80 kW and a side-band power of 10 kW per side-band. The
message bandwidth is 5 kHz. Assuming the use of an envelope detector in the receiver, determine
the output SNR of the system. Derive the relation used.

Given: 𝑁₀/2 = 0.5 × 10⁻³ W/Hz, so 𝑁₀ = 1 × 10⁻³ W/Hz; 𝑃𝐶 = 𝐴𝑐²/2 = 80 kW; 𝑊 = 5 kHz;
𝑃𝑈𝑆𝐵 = 𝑃𝐿𝑆𝐵 = 𝜇²𝐴𝑐²/8 = 10 kW.

Solution:
i.
𝑷𝑼𝑺𝑩 = 𝑷𝑳𝑺𝑩 = 𝝁²𝑨𝒄²/𝟖
10 kW = (𝜇²/4)(𝐴𝑐²/2)
40 kW = 𝜇²𝑃𝑐
𝜇² = 40 kW/𝑃𝐶 = 40 × 10³/(80 × 10³)
𝜇² = 0.5
𝝁 = 𝟎.𝟕𝟎𝟕
[𝑆𝑁𝑅]₀ = (𝐴𝑐²/2) × (1/(𝑁₀𝑊)) × (𝜇²/2)
= 80 × 10³ × 1/(1 × 10⁻³ × 5 × 10³) × (0.5/2)
[𝑺𝑵𝑹]𝟎 = 𝟒𝟎𝟎𝟎
[𝑆𝑁𝑅]0 𝑑𝐵 = 10log[4000]
[𝑺𝑵𝑹]𝟎 𝒅𝑩 = 𝟑𝟔. 𝟎𝟐 𝐝𝐁
ii. For the derivation of the output SNR relation used, refer the noise-in-AM-receiver analysis.
Note:
FOM = 𝜇²/(2 + 𝜇²) = 0.5/(2 + 0.5)
𝐅𝐎𝐌 = 𝟎.𝟐

Problem 7.
The average noise power per unit bandwidth measured at the front end of an AM receiver is
0.5 × 10⁻³ W/Hz. The modulating wave is sinusoidal, with a carrier power of 80 kW and a side-band
power of 10 kW per side-band. The message bandwidth is 5 kHz. Assuming the use of an
envelope detector in the receiver, determine the output SNR of the system. By how many dB
is this system inferior to a DSB-SC modulation system?

Solution:
i. Refer Problem 6.
ii. We know that, the FOM of DSB-SC is
(𝐅𝐎𝐌)DSB-SC = 𝟏
The FOM of AM is,
(FOM)AM = 0.2
(FOM)DSB-SC/(FOM)AM = 1/0.2
(𝐅𝐎𝐌)DSB-SC/(𝐅𝐎𝐌)𝐀𝐌 = 𝟓
The improvement of noise performance in DSB-SC expressed in dB is,
𝟏𝟎𝐥𝐨𝐠 𝟏𝟎 [𝟓] = 𝟔. 𝟗𝟖 𝐝𝐁

Problem 8.
An AM receiver operating with a sinusoidal modulating wave and 80% modulation has an output
signal to noise ratio of 30 dB. What is the corresponding carrier to noise ratio.
Given: 𝜇 = 0.8, [𝑆𝑁𝑅]₀ = 30 dB.

Solution:

[𝑆𝑁𝑅]₀ dB = 30
[𝑺𝑵𝑹]₀ = 𝟏𝟎𝟎𝟎
[𝑆𝑁𝑅]₀ = 𝜇²𝐴𝑐²/(4𝑁₀𝑊)
1000 = (0.8)²𝐴𝑐²/(4𝑁₀𝑊)
𝐴𝑐²/(4𝑁₀𝑊) = 1000/(0.8)²
𝑨𝒄²/(𝟒𝑵₀𝑾) = 𝟏𝟓𝟔𝟑
𝐶𝑁𝑅 is given by:
𝑪𝑵𝑹 = 𝝆 = 𝑨𝒄²/(𝟒𝑵₀𝑾)
𝐶𝑁𝑅 = 1563
[𝐶𝑁𝑅] dB = 10log₁₀[1563]
[𝑪𝑵𝑹] 𝐝𝐁 = 𝟑𝟏.𝟗𝟑 𝐝𝐁

Problem 9.
An FM receiver receives an FM Signal
𝑠(𝑡) = 10cos[(2𝜋 × 108 )𝑡 + 6sin(2𝜋 × 103 𝑡)]
Calculate the figure of merit of this receiver.

Given: 𝑆(𝑡) = 10cos[(2𝜋 × 10⁸)𝑡 + 6sin(2𝜋 × 10³𝑡)].


Solution:

Comparing 𝑠(𝑡) with standard FM modulated wave,


𝑆(𝑡) = 𝐴𝑐cos[2𝜋𝑓𝑐𝑡 + 𝛽sin(2𝜋𝑓𝑚𝑡)]
𝛽 = 6
FOM = (3/2)𝛽² = (3/2)(6)²
𝐅𝐎𝐌 = 𝟓𝟒

Problem 10.
An FM signal with a frequency deviation of 75 kHz is applied to an FM demodulator. The input SNR
is 15 dB and the modulating frequency is 10 kHz. Estimate the SNR at the demodulator output.
Given: Δ𝑓 = 75 kHz, 𝑓𝑚 = 10 kHz.
Solution:

[𝑺𝑵𝑹]𝑰 = [𝑺𝑵𝑹]𝑪 = 𝟏𝟓 𝐝𝐁
[𝑆𝑁𝑅]𝑰 𝒅𝑩 = 15
10log10 [𝑆𝑁𝑅]𝐼 = 15
[𝑺𝑵𝑹]𝑰 = 𝟑𝟏. 𝟔𝟐𝟑

We know that,
(𝑺𝑵𝑹)₀/(𝑺𝑵𝑹)𝑰 = (𝟑/𝟐)𝜷²
(𝑆𝑁𝑅)₀ = (3/2)𝛽²(𝑆𝑁𝑅)𝐼
𝛽 = Δ𝑓/𝑓𝑚 = 75 kHz/10 kHz
𝜷 = 𝟕.𝟓
(𝑆𝑁𝑅)₀ = (3/2)(7.5)² × 31.623
(𝑆𝑁𝑅)₀ = 2668.19
(𝑆𝑁𝑅)₀ dB = 10log₁₀(2668.19)
(𝑺𝑵𝑹)₀ 𝐝𝐁 = 𝟑𝟒.𝟐𝟔 𝐝𝐁
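The same computation in Python:

```python
import math

snr_i = 10 ** (15 / 10)        # 15 dB input SNR as a ratio (≈ 31.623)
beta = 75e3 / 10e3             # delta_f / fm = 7.5
snr_o = 1.5 * beta**2 * snr_i
print(snr_o)                   # ≈ 2668.2
print(10 * math.log10(snr_o))  # ≈ 34.26 dB
```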

Problem 11.
A carrier wave of frequency 100kHz is frequency modulated by a sine wave of amplitude 5 V &
frequency 20kHz. Find the figure of merit of the FM receiver if the frequency sensitivity of the
modulator is 10kHz/V.
Given: 𝑓𝑐 = 100kHz, 𝑓𝑚 = 20kHz, 𝐴𝑚 = 5 V.
Solution:
𝑲𝒇 = 𝟏𝟎 𝐤𝐇𝐳/𝐕
Δ𝑓 = 𝐾𝑓𝐴𝑚 = 10 kHz/V × 5 V
Δ𝑓 = 50 kHz
𝛽 = Δ𝑓/𝑓𝑚 = 50 kHz/20 kHz
𝜷 = 𝟐.𝟓
FOM = (3/2)𝛽² = (3/2)(2.5)²
𝐅𝐎𝐌 = 𝟗.𝟑𝟕𝟓
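A quick check of the arithmetic, since (3/2)·(2.5)² evaluates to 9.375:

```python
Kf = 10e3                  # Hz per volt
Am, fm = 5.0, 20e3
delta_f = Kf * Am          # 50 kHz deviation
beta = delta_f / fm        # 2.5
print(1.5 * beta**2)       # 9.375
```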

Problem 12.
Explain the need of pre-emphasis and de-emphasis in FM systems and show that the
improvement ratio ' 𝐼 ' is given by:
𝐼 = (𝑊/𝑓₀)³ / (3[(𝑊/𝑓₀) − tan⁻¹(𝑊/𝑓₀)])
Evaluate the value for a typical commercial FM broadcasting system where 𝑊 = 15kHz and
𝑓0 = 2.1 𝑘𝐻𝑧
Given: 𝑊 = 15kHz & 𝑓0 = 2.1 𝐾𝐻𝑧

Solution: Refer pre-emphasis and De-emphasis.

Note: Set the calculator to radian mode and calculate:


𝑰 = (𝑾/𝒇₀)³ / (𝟑[(𝑾/𝒇₀) − 𝐭𝐚𝐧⁻¹(𝑾/𝒇₀)])
= (15 × 10³/2.1 × 10³)³ / (3[(15 × 10³/2.1 × 10³) − tan⁻¹(15 × 10³/2.1 × 10³)])
𝑰 = 𝟐𝟏.𝟐𝟕
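The same evaluation, plus the result expressed in dB (the dB form is an extra of mine, not part of the problem statement):

```python
import math

W, f0 = 15e3, 2.1e3
x = W / f0
I = x**3 / (3 * (x - math.atan(x)))
print(I)                   # ≈ 21.27
print(10 * math.log10(I))  # ≈ 13.3 dB of output-noise improvement
```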
