
Advanced Digital Communications

EC721
Lecture 2

Prepared by: Dr. Mohamed Abaza


Email: [email protected]
Reference: “Digital Communications by John G. Proakis”
Review of Probabilities and Random Processes
• What is the difference between deterministic signals and random signals?
• Deterministic signals can be expressed by a mathematical formula: at any instant they take a unique value described by a mathematical function. Such signals do not represent well the signals at the center of focus in many analog and digital communication systems.
• Random signals are unpredictable and cannot be expressed by a mathematical formula: they take multiple possible values and are described by a probability density function (pdf) or a cumulative distribution function (cdf). A random signal may be a function of one or more random variables.
How Can a Signal be Random?
• I am sending the signal and I know what I am sending; it is deterministic, so how come I receive it as random?
• The signal is deterministic for the transmitter BUT random from the receiver's point of view. The main reason is the NOISE!
How do we Deal with a Signal Corrupted by Noise, and What do we Measure?
That is Difficult; How Can we Deal with Unpredictable Behavior?
• Although random signals evolve in time in an unpredictable manner, they can be described by statistical averages instead of exact equations.
1. We will learn how to statistically describe and model a signal
2. We will apply the theory to communication systems
Can you Think of an Example of a Random Experiment?
• MATH 6....
– A random experiment is an experiment whose outcome cannot be predicted with certainty
1. Tossing a coin
2. Rolling a die
What is a Random Variable?
• Random Variable (R.V.): a variable whose possible values are numerical outcomes of a random phenomenon. There are two types of random variables: discrete and continuous.
– Discrete Random Variable: may take on only a countable number of distinct values such as 0, 1, 2, 3, 4, ... (Examples: the Friday night attendance at a cinema, rolling a die)
– Continuous Random Variable: can take on any value along a continuous range (Examples: height, weight, the amount of sugar in an orange)
• Notation:
– Random variables are denoted by upper-case letters (X)
– Individual outcomes of a random variable are denoted by lower-case letters (x)
What is a Probability Distribution?
• A probability distribution is a mapping of all the possible values of a random variable to their corresponding probabilities for a given sample space. Equivalently, it is a table, equation or graph that links each outcome of a statistical experiment with its probability of occurrence.
– The probability distribution is denoted by P(X = x), the probability that the random variable X equals a particular value x, written in short form as P(x).
• Probability distributions are either discrete or continuous:
– Discrete probability distribution: assigns probabilities (masses) to the individual outcomes, denoted by
f_X(x) = P(X = x)
which in this case is called a probability mass function.
– Continuous probability distribution: assigns a density at individual points; the probability of a range is obtained by integrating the density function. It is denoted by the probability density function (PDF or pdf)
f_X(x)
whose integral over an interval gives a probability.
– The cumulative distribution function (CDF or cdf), defined for both discrete and continuous random variables, is
F_X(x) = P(X ≤ x)
Cumulative Distribution Function Properties
• The distribution function F_X(x) is bounded between zero and one.
• The distribution function F_X(x) is a monotone non-decreasing function of x; that is,
F_X(x1) ≤ F_X(x2) if x1 ≤ x2
• Probability of the certain event:
F_X(∞) = 1
• Probability of the impossible event:
F_X(−∞) = 0
Probability Density Function
• Another way to describe probabilities is through the derivative of the distribution function, called the probability density function (PDF):
f_X(x) = d F_X(x) / dx
• The probability of an event over an interval is the area under the probability density function in that interval:
P(x1 < X ≤ x2) = P(X ≤ x2) − P(X ≤ x1) = F_X(x2) − F_X(x1) = ∫_{x1}^{x2} f_X(x) dx
• The pdf is ALWAYS a non-negative function with a total area of ONE!
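The interval-probability relation above can be checked numerically. This is a minimal sketch, assuming an exponential pdf f(x) = λe^(−λx) with λ = 1 as the example distribution; the trapezoidal integral of the pdf over an interval should match the CDF difference.

```python
import math

# Assumed example pdf and cdf: exponential with rate lam.
def pdf(x, lam=1.0):
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def cdf(x, lam=1.0):
    return 1.0 - math.exp(-lam * x) if x >= 0 else 0.0

def prob_interval(x1, x2, n=10_000):
    # Trapezoidal integration of the pdf over [x1, x2].
    h = (x2 - x1) / n
    total = 0.5 * (pdf(x1) + pdf(x2))
    for i in range(1, n):
        total += pdf(x1 + i * h)
    return total * h

p_int = prob_interval(1.0, 2.0)        # area under the pdf on (1, 2]
p_cdf = cdf(2.0) - cdf(1.0)            # F_X(2) - F_X(1)
print(p_int, p_cdf)                    # both ≈ 0.2325
```

Both routes give P(1 < X ≤ 2) = e^(−1) − e^(−2), illustrating that the CDF difference and the pdf integral agree.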
Examples of Discrete CDF and PDF

Examples of Continuous CDF and PDF

What is the Relative Frequency Approach?
What are the Axioms of Probability?
• S denotes the SAMPLE SPACE, which corresponds to the total set of sample points.
• Sure event: a sure event is an event which always happens.
• The null set φ is called the NULL or IMPOSSIBLE EVENT.
• A single sample point is called an ELEMENTARY POINT.
What is the Definition of Probability?
P(S) = 1
0 ≤ P(A) ≤ 1
• If A + B is the union of two MUTUALLY EXCLUSIVE events, then the intersection is empty and
P(A ∪ B) = P(A) + P(B)
• In general,
P(A ∪ B) = P(A) + P(B) − P(A, B)
where P(A, B) = P(AB) = P(A ∩ B) is the probability of the joint event.
What are the Properties of Probability?
• Property 1:
P(Ā) = 1 − P(A)
– where Ā (denoting "not A") is the complement of A.
• This property helps to investigate the non-occurrence of an event.
What are the Properties of Probability?
• Property 2:
– If M mutually exclusive events A1, A2, A3, ..., AM have the exhaustive property
A1 + A2 + A3 + ... + AM = S
then
P(A1) + P(A2) + P(A3) + ... + P(AM) = 1
– When the M events are equally likely (they have equal probabilities), then
P(Ai) = 1/M,  i = 1, 2, 3, ..., M
What are the Properties of Probability?
• Property 3:
– When the events A and B are not mutually exclusive, the probability of the union event "A or B" equals
P(A ∪ B) = P(A) + P(B) − P(AB)
– where P(AB) is the probability of the joint event "A and B".
What is the Joint Probability?
• P(AB) is referred to as the joint probability. It has the relative frequency interpretation
P(AB) = lim_{n→∞} N_n(AB) / n
where N_n(AB) denotes the number of times events A and B occur simultaneously in n trials of the experiment.
What are the Conditional Probability and Bayes' Theorem?
• P(A|B) and P(B|A) are referred to as conditional probabilities.
• Suppose we perform an experiment that involves a pair of events A and B. P(B|A) denotes the probability of event B given that event A has occurred.
– Provided that A has non-zero probability, the conditional probability P(B|A) is defined by
P(B|A) = P(AB) / P(A)
• Bayes' Theorem:
P(AB) = P(A|B) P(B) = P(B|A) P(A)
P(B|A) = P(A|B) P(B) / P(A)
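Bayes' theorem can be illustrated with a small numeric sketch. The scenario and all numbers below are assumptions chosen for illustration (a noisy binary channel): A is "bit 1 was sent", B is "a 1 was received".

```python
# Assumed priors and likelihoods for a hypothetical noisy binary channel.
p_A = 0.5             # P(A): prior probability that bit 1 was sent
p_B_given_A = 0.9     # P(B|A): receive "1" given "1" was sent
p_B_given_notA = 0.2  # P(B|~A): receive "1" given "0" was sent

# Total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A)
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' theorem: P(A|B) = P(B|A)P(A) / P(B)
p_A_given_B = p_B_given_A * p_A / p_B
print(round(p_A_given_B, 4))  # ≈ 0.8182
```

Observing a "1" raises the probability that a "1" was sent from the 0.5 prior to about 0.82, which is exactly the update Bayes' theorem formalizes.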
Set Theory and Probabilities
• Let A and B be events in a sample space S (which contains all outcomes, P(S) = 1).
– Union: the union of A and B (A ∪ B) is the event consisting of all outcomes in A or B.
– Intersection: the intersection of A and B (A ∩ B) is the event consisting of all outcomes in A and B.
– Complement: the complement of A (Ā) is the set of outcomes in S not contained in A.
– Mutually exclusive: if A and B have no outcomes in common, they are mutually exclusive.
Set Theory and Probabilities

Example of Intersected Events
What is a Statistical Average and How to Compute it?
• The statistical average determines the average behavior of the outcomes arising in a random experiment. It is also known as the expected value, mean, or first moment.
• Recall the time average of a signal x(t):
⟨x(t)⟩ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) dt
What is a Statistical Average and How to Compute it?
• The statistical average of a random variable X is defined by
m_X = E[X] = ∫_{−∞}^{∞} x f_X(x) dx
• How about the statistical average of a function F(X) of the random variable X?
E[F(X)] = ∫_{−∞}^{∞} F(x) f_X(x) dx
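The defining integral m_X = ∫ x f_X(x) dx can be evaluated numerically. A minimal sketch, assuming an exponential pdf f_X(x) = (1/λ)e^(−x/λ) with λ = 2 as the example (its mean is λ in closed form); the infinite upper limit is truncated where the integrand is negligible.

```python
import math

lam = 2.0                      # assumed parameter; E[X] = lam in closed form
lo, hi, n = 0.0, 60.0, 60_000  # truncate the infinite upper limit at 60
h = (hi - lo) / n

# Midpoint-rule evaluation of E[X] = integral of x * f_X(x) dx.
mean = sum(
    (lo + (i + 0.5) * h) * (1 / lam) * math.exp(-(lo + (i + 0.5) * h) / lam)
    for i in range(n)
) * h
print(mean)  # ≈ 2.0
```

The numerical mean agrees with the closed-form value λ = 2, showing how the expectation integral is computed in practice when no closed form is at hand.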
Function of a Random Variable
• Let X denote a random variable, and g(X) a real-valued function of that random variable. Then
Y = g(X)
is itself a random variable!
• The question is how to find the expected value of Y:
E[Y] = ∫_{−∞}^{∞} y f_Y(y) dy
E[g(X)] = ∫_{−∞}^{∞} g(x) f_X(x) dx
How to Compute Moments?
• The nth moment of the probability distribution of the random variable X is obtained by
E[X^n] = ∫_{−∞}^{∞} x^n f_X(x) dx
n = 1 ..... mean; n = 2 ..... mean-square value of X
How about the Central Moments?
• Simply defined as the moments of the difference between the random variable X and its mean:
E[(X − μ_X)^n] = ∫_{−∞}^{∞} (x − μ_X)^n f_X(x) dx
• For n = 1 the central moment is ?? (zero). For n = 2 the 2nd central moment is referred to as the VARIANCE!
What is the Variance?
• The variance of a random variable X is in some sense a measure of the variable's randomness!
• By specifying the variance, we essentially constrain the effective width of the probability density function of the random variable about the mean.
var[X] = σ_X² = E[(X − μ_X)²] = ∫_{−∞}^{∞} (x − μ_X)² f_X(x) dx
• The square root of the variance is known as the standard deviation.
What is the Relation between the Variance and the Mean-Square Value?
σ_X² = E[(X − μ_X)²]
= E[X² − 2 μ_X X + μ_X²]
= E[X²] − μ_X²
• Note that we applied the linearity property of the expectation operator!
• What if the mean is zero??
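The identity var[X] = E[X²] − μ_X² can be checked by Monte Carlo. A minimal sketch with an assumed Gaussian example (mean 2, standard deviation 3, so the variance should come out near 9); only the Python standard library is used.

```python
import random

random.seed(1)
# Assumed example: 200k samples of a Gaussian with mean 2 and std 3.
xs = [random.gauss(2.0, 3.0) for _ in range(200_000)]

mean = sum(xs) / len(xs)                              # sample mean mu_X
mean_sq = sum(x * x for x in xs) / len(xs)            # sample E[X^2]
var_direct = sum((x - mean) ** 2 for x in xs) / len(xs)  # E[(X - mu_X)^2]
var_identity = mean_sq - mean ** 2                    # E[X^2] - mu_X^2

print(var_direct, var_identity)  # both ≈ 9
```

The two estimates coincide (up to floating-point rounding), and when the mean is zero the variance reduces to the mean-square value, answering the question on the slide.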
What is the Characteristic Function?
• It is the inverse Fourier transform of the probability density function (with v playing the role of 2πf and x the role of t):
φ_X(v) = E[exp(jvX)] = ∫_{−∞}^{∞} f_X(x) exp(jvx) dx
• What if we are given the characteristic function; how do we get the probability density function back?
f_X(x) = (1/2π) ∫_{−∞}^{∞} φ_X(v) exp(−jvx) dv
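The characteristic function integral can be evaluated numerically for a known case. A minimal sketch, assuming X ~ N(0, 1), for which φ_X(v) = exp(−v²/2) in closed form; by symmetry of the Gaussian pdf the imaginary part of E[exp(jvX)] vanishes, so only the cosine (real) part is integrated.

```python
import math

def f_X(x):
    # Standard Gaussian pdf (assumed example distribution).
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def phi_numeric(v, lo=-10.0, hi=10.0, n=20_000):
    # Midpoint-rule evaluation of the real part of E[exp(jvX)];
    # the Gaussian tail beyond |x| = 10 is negligible.
    h = (hi - lo) / n
    return sum(
        f_X(lo + (i + 0.5) * h) * math.cos(v * (lo + (i + 0.5) * h))
        for i in range(n)
    ) * h

v = 1.3
print(phi_numeric(v), math.exp(-v * v / 2))  # ≈ 0.430 both
```

The numerical integral matches the closed-form exp(−v²/2), confirming the forward transform; the inversion formula works the same way in reverse.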
What are Joint and Conditional Density Functions?
• Consider the joint and conditional distributions of two random variables.
• Joint CDF:
F_{X,Y}(x, y) = Pr[X ≤ x, Y ≤ y] = ∫_{−∞}^{x} ∫_{−∞}^{y} f_{X,Y}(x, y) dx dy
• Joint PDF:
f_{X,Y}(x, y) = ∂² F_{X,Y}(x, y) / ∂x ∂y
Properties of Joint Distributions
• Extending the previous properties to the joint case, it follows that
F_{X,Y}(−∞, −∞) = 0
F_{X,Y}(x, −∞) = 0
F_{X,Y}(−∞, y) = 0
F_{X,Y}(x, ∞) = F_X(x)
F_{X,Y}(∞, y) = F_Y(y)
F_{X,Y}(∞, ∞) = 1
• Also,
∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{X,Y}(x, y) dx dy = 1
• Marginal distributions:
f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dy
f_Y(y) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dx
How about the Conditional Densities?
• Bayes' rule:
f_{X|Y}(x|y) = f_{X,Y}(x, y) / f_Y(y)
• What if X and Y are independent?
f_{X|Y}(x|y) = f_X(x)
The conditional density is identical to the unconditional density.
What are Joint Moments?
• Consider a pair of random variables X and Y; the statistical averages of importance in this case are the joint moments, given by
E[X^i Y^k] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x^i y^k f_{X,Y}(x, y) dx dy
• When i = k = 1 the joint moment is known as the correlation, defined by E[XY].
• Random variables X and Y are orthogonal if and only if E[XY] = 0.
• If X and Y are independent, then E[XY] = E[X] E[Y].
How about the Covariance?
• It is the correlation of the centered random variables, given by
cov[XY] = E[(X − E[X])(Y − E[Y])] = E[XY] − μ_X μ_Y
• What is the correlation coefficient?
ρ = cov[XY] / (σ_X σ_Y)
• Random variables X and Y are uncorrelated if and only if cov[XY] = 0. Two statistically independent random variables are uncorrelated, BUT the converse is not true!
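The covariance and correlation coefficient can be estimated from samples. A minimal sketch with assumed data: Y = 2X + noise, for which the theoretical correlation coefficient is 2/√5 ≈ 0.894.

```python
import random

random.seed(9)
N = 100_000
# Assumed example: X standard Gaussian, Y = 2X plus unit-variance noise.
xs = [random.gauss(0, 1) for _ in range(N)]
ys = [2 * x + random.gauss(0, 1) for x in xs]

mx = sum(xs) / N
my = sum(ys) / N
# Sample covariance and standard deviations.
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / N
sx = (sum((x - mx) ** 2 for x in xs) / N) ** 0.5
sy = (sum((y - my) ** 2 for y in ys) / N) ** 0.5

rho = cov / (sx * sy)
print(rho)  # theory: 2/sqrt(5) ≈ 0.894
```

Note that ρ near +1 here reflects the strong linear dependence Y = 2X; independent X and Y would give ρ ≈ 0, but ρ ≈ 0 alone does not prove independence.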
Some Probability Distributions
• There are different shapes of probability distributions: Uniform, Normal, Exponential, Poisson, Binomial, etc.
• The probability that the random variable assumes a value within some given interval from x1 to x2 is defined to be the area under the graph of the probability density function between x1 and x2.
Uniform Distribution
• A random variable X is said to be uniformly distributed over the interval (a, b) if its probability density function is
f_X(x) = 1/(b − a),  a ≤ x ≤ b  (and 0 otherwise)
• Here a is the smallest value the variable can assume, and b is the largest value the variable can assume.
Uniform Distribution
• Expected value of X:
E[X] = (a + b)/2
• Variance of X:
var[X] = (b − a)²/12
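The uniform mean and variance formulas can be checked by sampling. A minimal sketch with an assumed interval (a, b) = (2, 5), so theory predicts a mean of 3.5 and a variance of 9/12 = 0.75.

```python
import random

random.seed(2)
a, b = 2.0, 5.0  # assumed interval for the example
xs = [random.uniform(a, b) for _ in range(200_000)]

mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(mean, var)  # theory: (a+b)/2 = 3.5 and (b-a)^2/12 = 0.75
```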
Normal (Gaussian) Distribution
• It is the most important distribution used in describing a continuous random variable.
• Used in many applications: heights of people, test scores, as well as scientific measurements.
• The probability density function is given by
f_X(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²))
Normal Distribution Main Characteristics
• The distribution is symmetric and has a bell shape.
• The entire family of normal probability distributions is defined by its mean and its standard deviation.

Normal Distribution Main Characteristics
• Two normal curves with different means (but the same standard deviation): the curves are shifted left and right.
• Two normal curves with different standard deviations (but the same mean): the curve with the larger standard deviation is lower and wider.
How to get the Probability for a Normal Distribution?
Exponential Distribution
• The probability density function is given by
f_X(x) = (1/λ) e^{−x/λ},  x ≥ 0, λ > 0
Rayleigh Distribution
• Used widely in wireless communication to model the fading a signal undergoes.
• The density function is defined as follows:
f_X(x) = (x/σ²) exp(−x²/(2σ²)),  x ≥ 0, σ > 0
E[X] = σ √(π/2);  Var[X] = (2 − π/2) σ²
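The Rayleigh moments above can be verified by simulation, using the fact (standard in fading analysis) that the envelope of two independent zero-mean Gaussians of equal variance σ² is Rayleigh-distributed. The value σ = 1.5 below is an assumed example.

```python
import math, random

random.seed(11)
sigma, N = 1.5, 200_000  # assumed sigma for the example
# Rayleigh samples as the envelope sqrt(G1^2 + G2^2) of two Gaussians.
rs = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
      for _ in range(N)]

mean = sum(rs) / N
var = sum((r - mean) ** 2 for r in rs) / N
print(mean, var)
# theory: sigma*sqrt(pi/2) ≈ 1.880 and (2 - pi/2)*sigma^2 ≈ 0.966
```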
Why Transformation of Random Variables and How?
• A problem that often arises in the statistical characterization of communication systems is determining the pdf of a random variable Y related to another random variable X by the transformation
y = g(x)
• There are 2 cases to be considered: a one-to-one (monotone) transformation, or a one-to-many (non-monotone) transformation.

Why Transformation of Random Variables and How?
• In the case of a one-to-many transformation, to obtain the pdf of Y from X given that y = g(x), we do the following:
– Find the roots x_k of y = g(x)
– Find dy/dx (the Jacobian of the transformation)
– Sum the following expression over every root:
f_Y(y) = Σ_k f_X(x_k) / |dy/dx|_{x = x_k}
• How about the one-to-one case? How can we change the above equation??
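The one-to-many formula can be checked on a concrete case. A minimal sketch, assuming Y = X² with X ~ N(0, 1): the roots of y = x² are ±√y, and |dy/dx| = 2|x|, so f_Y(y) sums the Gaussian pdf over both roots. Monte Carlo interval probabilities are compared with the integral of the derived f_Y.

```python
import math, random

def f_X(x):
    # Standard Gaussian pdf (assumed example).
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def f_Y(y):
    # Transformation formula for y = x^2: roots +/- sqrt(y), |dy/dx| = 2|x|.
    r = math.sqrt(y)
    return (f_X(r) + f_X(-r)) / (2 * r)

random.seed(7)
ys = [random.gauss(0, 1) ** 2 for _ in range(200_000)]

a, b = 0.5, 1.5
empirical = sum(a < y <= b for y in ys) / len(ys)

# Midpoint-rule integral of the derived density over (a, b].
n = 2000
h = (b - a) / n
analytic = sum(f_Y(a + (i + 0.5) * h) for i in range(n)) * h
print(empirical, analytic)  # both ≈ 0.26
```

In the one-to-one case the sum collapses to the single root, f_Y(y) = f_X(x)/|dy/dx| evaluated at x = g⁻¹(y).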
Why Transformation of Random Variables and How?

Why Transformation of Random Variables and How?
What is a Random Process?
• It is an extension of the concepts associated with random variables when the time parameter is brought into account.
• Random processes have 2 properties:
– They are functions of time.
– Before conducting an experiment, it is not possible to exactly define the waveform that will be observed in the future.
What is a Random Process?
Fig 1: An Ensemble of Sample Functions

What is a Random Process?
Terminologies Used in Random Processes
• For a fixed sample point s_i, the graph of the function X(t, s_i) versus time t is called a realization or sample function of the random process, for simplicity denoted by
x_i(t) = X(t, s_i)
Then What is the Definition of a Random Process?
• Note that the probability density function of the random process, and the joint pdf, depend on time.
Difference Between a Random Variable and a Random Process
• For a random variable, the outcome of a random experiment is mapped into a number.
• For a random process, the outcome of a random experiment is mapped into a waveform that is a function of time.
How to Compute the Mean of a Random Process?
• The mean (ensemble average or expected value) of a random process X(t) is defined by
μ_X(t) = E[X(t)] = ∫_{−∞}^{∞} x f_{X(t)}(x) dx  (at a given time t)
• The nth-order moment of a random process is given by
μ_X^(n)(t) = E[X^n(t)] = ∫_{−∞}^{∞} x^n f_{X(t)}(x) dx  (at a given time t)
How to Compute the Autocorrelation of a Random Process?
• An autocorrelation function provides a means of describing the interdependence of two random variables obtained by observing a random process X(t) at times τ seconds apart.
• The autocorrelation function of the process X(t) is the expectation of the product of the two random variables X(t1) and X(t2) obtained by observing the process X(t) at times t1 and t2, respectively:
R_X(t1, t2) = E[X(t1) X(t2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 f_{X(t1),X(t2)}(x1, x2) dx1 dx2
• What if we have a strictly stationary random process?? It will only depend on the time difference t2 − t1:
R_X(t1, t2) = R_X(t2 − t1)
Properties of the Autocorrelation Function
• The autocorrelation function of a stationary process can be redefined as follows:
R_X(τ) = E[X(t + τ) X(t)]
• Properties:
– Setting τ = 0 gives the mean-square value of the process:
R_X(0) = E[X²(t)]
– The autocorrelation function is an even function:
R_X(τ) = R_X(−τ)
– The autocorrelation function has its maximum magnitude at τ = 0:
|R_X(τ)| ≤ R_X(0)
• The more rapidly the random process changes with time, the more rapidly the autocorrelation decreases from its maximum value.
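These properties can be observed on a classic example process: X(t) = A cos(ωt + Θ) with Θ uniform on (0, 2π), whose theoretical autocorrelation is R_X(τ) = (A²/2) cos(ωτ). The amplitude and frequency below are assumed values; the ensemble average is estimated over many realizations of Θ.

```python
import math, random

random.seed(3)
A, w, N = 2.0, 5.0, 100_000  # assumed amplitude, angular frequency, ensemble size
thetas = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def R_hat(t1, t2):
    # Ensemble estimate of E[X(t1) X(t2)] over the random phase.
    return sum(A * math.cos(w * t1 + th) * A * math.cos(w * t2 + th)
               for th in thetas) / N

r0 = R_hat(0.0, 0.0)     # theory: A^2/2 = 2  (mean-square value)
r_tau = R_hat(0.0, 0.3)  # theory: 2*cos(w*0.3) = 2*cos(1.5)
print(r0, r_tau)
```

The estimate at τ = 0 recovers the mean-square value, and |R_X(τ)| never exceeds it, matching the listed properties.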
How to Compute the Autocovariance of a Random Process?
• The autocovariance of a random process X(t) is computed as follows:
C_X(t1, t2) = E[(X(t1) − μ_X)(X(t2) − μ_X)]
• What if we have a strictly stationary random process??
C_X(t1, t2) = R_X(t2 − t1) − μ_X²
• It is obvious from the above equation that, like the autocorrelation function, the autocovariance of a strictly stationary random process depends only on the time difference!
• The mean and the autocorrelation function are sufficient to describe the first 2 moments of the process.
How to Compute Cross-Correlation Functions?
• Consider the more general case of two random processes X(t) and Y(t) with autocorrelations R_X(t, u) and R_Y(t, u). The two cross-correlation functions are defined as
R_XY(t, u) = E[X(t) Y(u)]
R_YX(t, u) = E[Y(t) X(u)]
where t and u are the two values of time at which the processes are observed.
• It is convenient to display the correlation properties of the two random processes X(t) and Y(t) in matrix form, as follows:
R(t, u) = [ R_X(t, u)   R_XY(t, u)
            R_YX(t, u)  R_Y(t, u)  ]
Properties of the Cross-Correlation Function
• Unlike the autocorrelation function, it is not generally an even function of τ.
• Also, it does not necessarily have a maximum at the origin.
• BUT it obeys a certain symmetry relationship, as follows:
R_XY(τ) = R_YX(−τ)
What is a Stationary Process?
• To define a stationary process, consider a random process X(t) that is initiated at t = −∞, and let X(t1), X(t2), ..., X(tk) denote the random variables obtained by observing the random process at different times. The joint distribution of these random variables is f_{X(t1),...,X(tk)}(x1, ..., xk).
• If all the observations are shifted in time to X(t1+τ), X(t2+τ), ..., X(tk+τ), the joint pdf becomes f_{X(t1+τ),...,X(tk+τ)}(x1, ..., xk).
• The random process is said to be stationary in the strict sense, or strictly stationary (SSS), if
f_{X(t1+τ),...,X(tk+τ)}(x1, ..., xk) = f_{X(t1),...,X(tk)}(x1, ..., xk)
– In other words, the random process is strictly stationary if the joint distribution of any set of random variables obtained by observing it is invariant with respect to the location of the origin t = 0.
– Equivalently, all statistical moments are independent of time up to order N → ∞.
• The random process is said to be wide-sense stationary (WSS) if only the first 2 moments (N = 2) are independent of time:
E[X(t)] = μ_X = const
R_X(t1, t2) = R_X(τ)
What is an Ergodic Process?
• Ensemble average: an average taken across the random process (over all sample functions).
• Time average: an average taken along the random process (over one sample function).
• For an ergodic process, A SINGLE SAMPLE FUNCTION REPRESENTS ALL SAMPLE FUNCTIONS.
What is an Ergodic Process?
• An ergodic process is a subset of a stationary process; in other words, for a random process to be ergodic it has to be stationary. However, the converse is not necessarily true!
• Ergodicity can be defined in the most general sense by considering higher-order statistics.
• A stationary random process X(t) is said to be ergodic if the time averages of any sample function are equal to the corresponding ensemble averages (expectations). For the mean, this is known as mean ergodicity:
⟨x(t)⟩ = E[X(t)]
lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) dt = ∫_{−∞}^{∞} x f_X(x) dx
• How about autocorrelation ergodicity?
lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x(t + τ) dt = E[X(t) X(t + τ)]
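Mean ergodicity can be demonstrated on the random-phase sinusoid X(t) = cos(ωt + Θ), a standard mean-ergodic example: its ensemble mean is zero, and the time average of any single realization converges to zero as well. The frequency and observation window below are assumed values.

```python
import math, random

random.seed(5)
w = 2 * math.pi                          # assumed angular frequency (period 1)
theta = random.uniform(0, 2 * math.pi)   # one fixed realization of the phase

# Time average of this single sample function over [-T/2, T/2],
# approximated with a midpoint Riemann sum.
T, n = 200.0, 200_000
h = T / n
time_avg = sum(math.cos(w * (-T / 2 + (i + 0.5) * h) + theta)
               for i in range(n)) * h / T
print(time_avg)  # ≈ 0, matching the ensemble mean E[X(t)] = 0
```

One realization suffices here because the process is ergodic in the mean; for a non-ergodic process (e.g. X(t) = C with C random) the time average of one realization would not recover the ensemble mean.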
How to Compute the PSD of a Random Process?
• The power spectral density (PSD) is a measure of the distribution of power over frequency for a random process.
• Similar to the Fourier transform, which defines the frequency response of a deterministic signal, the PSD S_X(f) is used to describe the frequency content of a random signal.

Properties of the PSD
• Properties are:
• Example: averaging a sinusoid over a uniformly distributed random phase θ ~ U(0, 2π) gives
(1/2π) ∫₀^{2π} 50 cos(400t + 200 + 2θ) dθ = 0
