


University of Tabuk
Faculty of Science
Department of Statistics

Lecture Notes on
Stochastic Processes
STAT 321

Dr. Hussein Yousif Eledum

Associate Professor in Applied Statistics, Department of Statistics, University of Tabuk

2018
Table of Contents

Page #

1. Lesson 1: Review of Probability 1

2. Lesson 2: Definition of Stochastic Process 8

3. Lesson 3: Characterization of Stochastic Processes 10

4. Lesson 4: Classification of Stochastic Processes 14

5. Solved Problems (1) 19

6. Solutions (1) 20

Markov Chains

7. Lesson 5: Discrete – Time Markov Chains 23

8. Lesson 6: Higher Transition probability matrix and Probability Distributions 27

9. Lesson 7: Stationary Distribution and Regular Markov Chain 31

10. Lesson 8: Classification of States 34

11. Solved Problems (2) 42

12. Solutions (2) 46

Poisson Processes

13. Lesson 9: Counting Process 53

14. Lesson 10: Poisson Process 56

15. Solved Problems (3) 58

16. Solutions (3) 60

Branching Processes

17. Lesson 11: Branching Process 62


University of Tabuk – Faculty of Science – Dept. of Statistics - Stochastic Processes STAT 321 1438-39

Lesson 1: Review of Probability

Random variable
A random variable is a real-valued function whose numerical value is determined by the outcome of a random experiment. In other words, a random variable X is a function that associates each element of the sample space Ω with a real number (i.e. X : Ω → ℝ).
Notation:
X (capital letter) denotes the random variable.
x (small letter) denotes a value of the random variable X.
Discrete random variable
A random variable X is called a discrete random variable if its set of possible values is countable.
Continuous random variable
A random variable X is called a continuous random variable if it can take values on a continuous scale.
Discrete probability distribution
If X is a discrete random variable with distinct values x₁, x₂, …, then the function

  f(x) = P(X = x) if x = xᵢ (i = 1, 2, …), and f(x) = 0 otherwise

is defined to be the probability mass function (pmf) of X.

This means that a discrete probability distribution is a listing of all possible distinct (elementary) outcomes and their probabilities of occurring:

  x:    x₁    x₂    …    xₙ
  f(x): f(x₁) f(x₂) …    f(xₙ)

The pmf f(x) is a real-valued function and satisfies the following properties:
1. f(xᵢ) ≥ 0  2. ∑ᵢ f(xᵢ) = 1  3. P(X ∈ A) = ∑_{xᵢ ∈ A} f(xᵢ), where A is a subset of the range of X.
Continuous probability distribution
The probability density function (pdf) of a continuous random variable X is a function f(x) defined so that probabilities are obtained by integration:

  P(a ≤ X ≤ b) = ∫ₐᵇ f(x) dx

The pdf f(x) satisfies the following properties:

1. f(x) ≥ 0  2. ∫_{−∞}^{∞} f(x) dx = 1  3. P(a < X < b) = ∫ₐᵇ f(x) dx, for any a < b.
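As a quick numerical illustration of properties 1–3 (a sketch not in the original notes, assuming a fair six-sided die):

```python
from fractions import Fraction

# pmf of a fair six-sided die: f(x) = 1/6 for x = 1, ..., 6
f = {x: Fraction(1, 6) for x in range(1, 7)}

# Property 1: f(x) >= 0 for every value
assert all(p >= 0 for p in f.values())

# Property 2: the probabilities sum to 1
assert sum(f.values()) == 1

# Property 3: P(X in A) is the sum of f(x) over A, e.g. A = {even outcomes}
A = {2, 4, 6}
p_A = sum(f[x] for x in A)
print(p_A)  # 1/2
```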

1 Lesson 1: Review of Probability Dr. Hussein Eledum



Cumulative distribution function (CDF)

For any random variable X we define the cumulative distribution function (cdf), F(x), by:
  F(x) = P(X ≤ x)
where x is any real value.

  F(x) = ∑_{t ≤ x} f(t)   if X is discrete
  F(x) = ∫_{−∞}^{x} f(t) dt   if X is continuous

F(x) is monotonic increasing, i.e.
  x₁ < x₂ ⟹ F(x₁) ≤ F(x₂)
and the limit of F(x) to the left is 0 and to the right is 1:
  lim_{x→−∞} F(x) = 0,  lim_{x→∞} F(x) = 1

For the continuous case:

1. P(a < X ≤ b) = P(X ≤ b) − P(X ≤ a) = F(b) − F(a)
2. f(x) = dF(x)/dx
Mathematical Expectation
Let X be a random variable with probability distribution f(x). The expected value (mean) of X is denoted by E(X) or μ_X and is defined by:

  E(X) = μ_X = ∑_{all x} x f(x)   if X is discrete
  E(X) = μ_X = ∫_{−∞}^{∞} x f(x) dx   if X is continuous

Linear property:
Let X be a random variable with pdf f(x), and let a and b be constants; then:
  E(aX + b) = aE(X) + b
The variance
Let X be a random variable with probability distribution f(x). The variance of X is denoted by Var(X) or σ²_X and is defined by:

  Var(X) = σ²_X = E[(X − μ)²] = ∑_{all x} (x − μ)² f(x)   if X is discrete
  Var(X) = σ²_X = E[(X − μ)²] = ∫_{−∞}^{∞} (x − μ)² f(x) dx   if X is continuous

and it can also be written as:

  Var(X) = E(X²) − μ²
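A minimal sketch of these formulas, assuming a small made-up pmf, checking that the definition and the shortcut formula Var(X) = E(X²) − μ² agree:

```python
# pmf of a hypothetical discrete random variable (values chosen for illustration)
f = {0: 0.2, 1: 0.5, 2: 0.3}

mu = sum(x * p for x, p in f.items())                   # E(X)
var_def = sum((x - mu) ** 2 * p for x, p in f.items())  # E[(X - mu)^2]
ex2 = sum(x ** 2 * p for x, p in f.items())             # E(X^2)
var_short = ex2 - mu ** 2                               # shortcut formula

print(mu, var_def, var_short)  # 1.1 0.49 0.49 (up to floating-point rounding)
```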


Linear property:
Let X be a random variable with pdf f(x), and let a and b be constants; then:
  Var(aX + b) = a² Var(X)
The moments:
Let X be a random variable with pdf f(x). The rth moment of X about the origin is given by:

  μ′ᵣ = E(Xʳ) = ∑_{all x} xʳ f(x)   if X is discrete
  μ′ᵣ = E(Xʳ) = ∫_{−∞}^{∞} xʳ f(x) dx   if X is continuous

if the expectation exists.
As a special case:
  μ′₁ = E(X) = μ
Let X be a random variable with pdf f(x). The rth central moment of X about μ is defined as:

  μᵣ = E[(X − μ)ʳ] = ∑_{all x} (x − μ)ʳ f(x)   if X is discrete
  μᵣ = E[(X − μ)ʳ] = ∫_{−∞}^{∞} (x − μ)ʳ f(x) dx   if X is continuous

As a special case:
  μ₂ = Var(X), the variance of X.
Moment-Generating Function (MGF):
Let X be a random variable with pdf f(x). The moment-generating function of X is given by E(e^{tX}) and is denoted by M_X(t). Hence:

  M_X(t) = E(e^{tX}) = ∑_{all x} e^{tx} f(x)   if X is discrete
  M_X(t) = E(e^{tX}) = ∫_{−∞}^{∞} e^{tx} f(x) dx   if X is continuous

Moment-generating functions will exist only if the sum or integral of the above definition converges. If a moment-generating function of a random variable X does exist, it can be used to generate all the moments of that variable.
Definition:
Let X be a random variable with moment-generating function M_X(t). Then:

  dʳM_X(t)/dtʳ |_{t=0} = μ′ᵣ

Therefore,

  dM_X(t)/dt |_{t=0} = μ′₁ = μ,  d²M_X(t)/dt² |_{t=0} = μ′₂,  and σ² = μ′₂ − (μ′₁)².

Example. Find the moment-generating function of the binomial random variable X and
then use it to verify that μ = np and σ² = npq.


The binomial pmf is

  f(x) = C(n, x) pˣ qⁿ⁻ˣ,  x = 0, 1, 2, …, n,  q = 1 − p.

  M_X(t) = ∑_{x=0}^{n} e^{tx} C(n, x) pˣ qⁿ⁻ˣ = ∑_{x=0}^{n} C(n, x)(pe^t)ˣ qⁿ⁻ˣ

Recognizing this last sum as the binomial expansion of (pe^t + q)ⁿ, we obtain:

  M_X(t) = (pe^t + q)ⁿ

now:  dM_X(t)/dt = n(pe^t + q)ⁿ⁻¹ pe^t

and:  d²M_X(t)/dt² = np e^t[(n − 1)(pe^t + q)ⁿ⁻² pe^t + (pe^t + q)ⁿ⁻¹]

Setting t = 0, we get:  μ′₁ = np  and  μ′₂ = np[(n − 1)p + 1]

Therefore,  μ = μ′₁ = np

  σ² = μ′₂ − μ² = np[(n − 1)p + 1] − n²p² = np(1 − p) = npq

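The MGF result can be cross-checked numerically; the sketch below (with assumed values n = 10, p = 0.3) computes μ′₁ and μ′₂ directly from the binomial pmf and confirms μ = np and σ² = npq:

```python
from math import comb

# Hypothetical parameters for illustration
n, p = 10, 0.3
q = 1 - p

pmf = [comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]

mean = sum(x * f for x, f in enumerate(pmf))       # mu'_1, should equal n*p
second = sum(x**2 * f for x, f in enumerate(pmf))  # mu'_2 = E(X^2)
var = second - mean**2                             # sigma^2 = mu'_2 - mu^2

print(mean, var)  # ~3.0 and ~2.1, i.e. np and npq
```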
Probability Generating Function (PGF):

Let X be a random variable defined over the non-negative integers. The probability generating function (PGF) is given by the polynomial

  G_X(s) = E(s^X) = ∑_{x=0}^{∞} sˣ P(X = x)

Example. Let X have a binomial distribution, X ~ Bin(n, p). The PGF is given by

  G_X(s) = E(s^X) = ∑_{x=0}^{n} C(n, x)(ps)ˣ qⁿ⁻ˣ = (q + ps)ⁿ

An important property of a PGF is that it converges for |s| ≤ 1 since

  G_X(1) = ∑_{x=0}^{∞} P(X = x) = 1.

The PGF can be used to directly derive the probability function of the random variable, as well as its moments. Single probabilities can be calculated as

  P(X = k) = (1/k!) dᵏG_X(s)/dsᵏ |_{s=0}

Example: A binomially distributed random variable has PGF G_X(s) = (q + ps)ⁿ. Thus,

  P(X = 0) = G_X(0) = qⁿ
  P(X = 1) = G′_X(0) = np qⁿ⁻¹
  P(X = 2) = (1/2!)G″_X(0) = (1/2)n(n − 1)p² qⁿ⁻²


The expectation E(X) satisfies the relation

  E(X) = G′_X(1) = ∑_{x=0}^{∞} x P(X = x)

Example: A binomially distributed random variable has mean

  E(X) = G′_X(1) = np(q + p)ⁿ⁻¹ = np

Calculating first

  E[X(X − 1)] = G″_X(1) = ∑_{x=0}^{∞} x(x − 1) P(X = x)

the variance is obtained as

  Var(X) = E(X²) − [E(X)]²
  = E[X(X − 1)] + E(X) − [E(X)]² = G″_X(1) + G′_X(1) − [G′_X(1)]²

Example: A binomially distributed random variable has variance

  Var(X) = n(n − 1)p² + np − n²p² = np(1 − p) = npq

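A sketch of the same computation done numerically (assumed values n = 8, p = 0.4), using the facts G′(1) = ∑ x P(X = x) and G″(1) = ∑ x(x − 1) P(X = x):

```python
from math import comb

# Hypothetical binomial parameters for illustration
n, p = 8, 0.4
q = 1 - p
pmf = [comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]

g1 = sum(x * f for x, f in enumerate(pmf))            # G'(1) = E(X)
g2 = sum(x * (x - 1) * f for x, f in enumerate(pmf))  # G''(1) = E[X(X-1)]

var = g2 + g1 - g1**2   # Var(X) = G''(1) + G'(1) - G'(1)^2
print(g1, var)          # ~np and ~npq
```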
Joint and marginal probability distributions

Joint probability distribution (discrete case)
If X and Y are two discrete random variables, then f(x, y) = P(X = x, Y = y) is called the joint probability mass function (jpmf) of X and Y, and f(x, y) has the following properties:
1. f(x, y) ≥ 0 for all x and y.  2. ∑ₓ ∑_y f(x, y) = 1

3. P[(X, Y) ∈ A] = ∑∑_{(x,y) ∈ A} f(x, y) for any region A in the xy-plane.

Marginal probability distribution (discrete case)
If X and Y are jointly discrete random variables with jpmf f(x, y), then g(x) and h(y) are called the marginal probability mass functions of X and Y respectively, which can be calculated as
  g(x) = ∑_y f(x, y),  h(y) = ∑ₓ f(x, y)
Joint probability distribution (continuous case)
If X and Y are two continuous random variables, then f(x, y) is called the joint probability density function (jpdf) of X and Y if f(x, y) has the following properties:
1. f(x, y) ≥ 0 for all x and y.
2. ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1

3. P[(X, Y) ∈ A] = ∫∫_A f(x, y) dx dy for any region A in the xy-plane.


Marginal probability distribution (continuous case)

If X and Y are jointly continuous random variables with jpdf f(x, y), then g(x) and h(y) are called the marginal probability density functions of X and Y respectively, which can be calculated as
  g(x) = ∫_{−∞}^{∞} f(x, y) dy

  h(y) = ∫_{−∞}^{∞} f(x, y) dx

Conditional distributions and conditional expectation

Conditional distribution
If X and Y are jointly distributed random variables (discrete or continuous) with joint probability function f(x, y), and g(x) and h(y) are the marginal probability distributions of X and Y respectively, then the conditional distribution of the random variable Y given that X = x is
  f(y | x) = f(x, y) / g(x),  g(x) > 0
Similarly, the conditional distribution of the random variable X given that Y = y is
  f(x | y) = f(x, y) / h(y),  h(y) > 0
Statistical independence
Let X and Y be two random variables (discrete or continuous) with joint probability function f(x, y), and let g(x) and h(y) be the marginal probability distributions of X and Y respectively. The random variables X and Y are said to be statistically independent if and only if:
  f(x, y) = g(x) h(y)
for all (x, y) within their ranges.
Conditional Expectation
If X and Y are jointly distributed random variables (discrete or continuous) with joint probability function f(x, y), and g(x) and h(y) are the marginal probability distributions of X and Y respectively, then the conditional expectation of the random variable Y given that X = x, for all values of x such that g(x) > 0, is
  E(Y | x) = ∑_y y f(y | x)   in the discrete case
  E(Y | x) = ∫_{−∞}^{∞} y f(y | x) dy   in the continuous case
Note that E(X | Y) is a function of Y.
Covariance
Let X and Y be random variables with joint probability distribution f(x, y). The covariance of X and Y, denoted by Cov(X, Y) or σ_XY, is:


  σ_XY = E[(X − μ_X)(Y − μ_Y)] = ∑ₓ ∑_y (x − μ_X)(y − μ_Y) f(x, y)   if X and Y are discrete
  σ_XY = E[(X − μ_X)(Y − μ_Y)] = ∫∫ (x − μ_X)(y − μ_Y) f(x, y) dx dy   if X and Y are continuous

The alternative and preferred formula for σ_XY is:
  σ_XY = E(XY) − μ_X μ_Y
Linear combination
Let X and Y be random variables with joint probability distribution f(x, y), and let a and b be constants; then
  Var(aX + bY) = a² Var(X) + b² Var(Y) + 2ab Cov(X, Y)
If X and Y are independent random variables, then
  Var(aX + bY) = a² Var(X) + b² Var(Y)
Correlation coefficient
Let X and Y be two random variables with covariance σ_XY and standard deviations σ_X and σ_Y respectively. The correlation coefficient of X and Y is
  ρ_XY = σ_XY / (σ_X σ_Y)

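The covariance and correlation formulas can be illustrated with a small hypothetical joint pmf (the table below is invented for illustration):

```python
from math import sqrt

# Hypothetical joint pmf f(x, y) over x in {0,1}, y in {0,1}
f = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

mu_x = sum(x * p for (x, y), p in f.items())
mu_y = sum(y * p for (x, y), p in f.items())
e_xy = sum(x * y * p for (x, y), p in f.items())

cov = e_xy - mu_x * mu_y                        # sigma_XY = E(XY) - mu_X mu_Y
var_x = sum(x**2 * p for (x, y), p in f.items()) - mu_x**2
var_y = sum(y**2 * p for (x, y), p in f.items()) - mu_y**2
rho = cov / sqrt(var_x * var_y)                 # correlation coefficient

print(cov, rho)  # cov = 0.1; rho is positive, as X and Y tend to agree
```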

Lesson 2: Definition of Stochastic Process

Definition
A stochastic process (random process) is a family of random variables {X(t), t ∈ T} or {X_t, t ∈ T}. That is, for each t in the index set T, X(t) is a random variable.
A random process can also be defined as a random variable that is a function of time t; that is, X(t) is a random variable for every time instant t, or it is a random variable indexed by time.
We know that a random variable is a function defined on the sample space Ω. Thus a random process {X(t), t ∈ T} is really a function of two arguments {X(t, ζ), t ∈ T, ζ ∈ Ω}.
For fixed t = t_k, X(t_k, ζ) is a random variable denoted by X(t_k), as ζ varies over the sample space Ω. On the other hand, for a fixed sample point ζᵢ ∈ Ω, X(t, ζᵢ) is a single function of time t, called a sample function or a realization of the process.
The totality of all sample functions is called an ensemble.
If both t and ζ are fixed, X(t, ζ) is a real number. We use the notation X(t) to represent X(t, ζ).
Description of a Random Process
In a random process {X(t), t ∈ T}, the index t is called the time parameter (or simply the time) and T is called the parameter set of the random process. Each X(t) takes values in some set S called the state space; X(t) is the state of the process at time t, and if X(t) = s we say the process is in state s at time t.
Definition:-
{X(t), t ∈ T} is a discrete-time (discrete-parameter) process if the index set T of the random process is discrete. A discrete-parameter process is also called a random sequence and is denoted by {X(n), n = 0, 1, 2, …} or {Xₙ}.
In practice this generally means T = {0, 1, 2, …}.
Thus a discrete-time process is {X(0), X(1), X(2), …}: a new random number recorded at every time 0, 1, 2, 3, …
Definition:-
{X(t), t ∈ T} is a continuous-time (continuous-parameter) process if the index set T is continuous.
In practice this generally means T = [0, ∞), or T = [0, K] for some K.


Thus a continuous-time process {X(t), t ∈ T} has a random number X(t) recorded at every instant in time.
(Note that X(t) need not change at every instant in time, but it is allowed to change at any time; i.e. not just at t = 0, 1, 2, …, like a discrete-time process.)
Definition:-
The state space, S, is the set of real values that X(t) can take.
Every X(t) takes a value in ℝ, but S will often be a smaller set: S ⊆ ℝ. For example, if X(t) is the outcome of a coin tossed at time t, then the state space is S = {0, 1}.
Definition:-
If the state space S is discrete, the process is called a discrete-state process, often referred to as a chain. In this case, the state space is often assumed to be S = {0, 1, 2, …}. If the state space S is continuous then we have a continuous-state process.
Examples:
Discrete-time, discrete-state processes
Example 1: Toss a balanced die repeatedly and observe the number on the uppermost face at toss n: X(1) is the number appearing on the first toss, X(2) the number appearing on the second toss, and so on. Then {X(n), n = 1, 2, …} is a random process, and the random variable X(n) denotes the number appearing at toss n, where n is the parameter. T = {1, 2, 3, …} and S = {1, 2, 3, 4, 5, 6}.

Example 2: The number of emails in your inbox at time t, observed once per day: T = {0, 1, 2, …} and S = {0, 1, 2, …}.

Example 3: Your bank balance on day t.

Example 4: The number of occupied channels in a telephone link at the arrival time of the nth customer, n = 1, 2, …
Continuous-time, discrete-state processes
Example 5: The number of occupied channels in a telephone link at time t > 0.
Example 6: The number of packets in the buffer of a statistical multiplexer at time t > 0.
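Example 1 can be simulated directly; a minimal sketch (fixed seed so the realization is reproducible) that records one sample path of the die-tossing process:

```python
import random

random.seed(0)  # fixed seed so the sample path is reproducible

# Example 1 as a simulation: X(n) = outcome of the nth toss of a fair die
T = range(1, 11)                          # observe the first 10 time steps
path = [random.randint(1, 6) for _ in T]  # one realization (sample function)

print(path)
assert all(1 <= x <= 6 for x in path)     # every state lies in S = {1,...,6}
```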


Lesson 3: Characterization of Stochastic Process

Distribution function (CDF) and probability distribution (PDF) of X(t):

Consider the stochastic process {X(t), t ∈ T}. For any fixed t₁ ∈ T, X(t₁) is a random variable, and its CDF, F_{X(t₁)}(x₁) or F_X(x₁; t₁), is defined as:
  F_X(x₁; t₁) = P(X(t₁) ≤ x₁)
F_X(x₁; t₁) is known as the first-order distribution function of the random process X(t).
Similarly, given t₁ and t₂, X(t₁) and X(t₂) represent two random variables; their joint CDF, F_{X(t₁),X(t₂)}(x₁, x₂) or F_X(x₁, x₂; t₁, t₂), is given by
  F_X(x₁, x₂; t₁, t₂) = P(X(t₁) ≤ x₁, X(t₂) ≤ x₂)
F_X(x₁, x₂; t₁, t₂) is known as the second-order distribution of X(t).
In general we define the nth-order distribution function of X(t) by
  F_X(x₁, …, xₙ; t₁, …, tₙ) = P(X(t₁) ≤ x₁, X(t₂) ≤ x₂, …, X(tₙ) ≤ xₙ)
Similarly, we can write joint PDFs or PMFs depending on whether X(t) is continuous-valued (the X(tᵢ)'s are continuous random variables) or discrete-valued (the X(tᵢ)'s are discrete random variables). For example, the second-order PDF and PMF are given respectively by
  f_X(x₁, x₂; t₁, t₂) = ∂²F_X(x₁, x₂; t₁, t₂)/∂x₁∂x₂
  p_X(x₁, x₂; t₁, t₂) = P(X(t₁) = x₁, X(t₂) = x₂)
Mean and variance functions of a random process:
As in the case of random variables, random processes are often described by using statistical averages. For the random process {X(t), t ∈ T}, the mean function μ_X(t) is defined as

  μ_X(t) = E[X(t)] = ∫_{−∞}^{∞} x f_X(x; t) dx

The above definition is valid for both continuous-time and discrete-time random processes. In particular, if {X(n), n = 0, 1, 2, …} is a discrete-time random process, then
  μ_X(n) = E[X(n)]
The mean function gives us an idea about how the random process behaves on average as time evolves (it is a function of time). For example, if X(t) is the temperature in a certain city, the expected value of X(t) is lowest in the winter and highest in the summer, and the mean function μ_X(t) varies accordingly over the year.


The variance of a random process X(t), also a function of time, is given by:

  Var[X(t)] = E[(X(t) − μ_X(t))²] = E[X(t)²] − [μ_X(t)]²

Autocorrelation and covariance functions:
The mean function μ_X(t) gives us the expected value of X(t) at time t, but it does not give us any information about how X(t₁) and X(t₂) are related. To get some insight on the relation between X(t₁) and X(t₂), we define correlation and covariance functions.
Given two random variables X(t₁), X(t₂), the autocorrelation function, or simply correlation function, R_X(t₁, t₂) is defined by:

  R_X(t₁, t₂) = E[X(t₁)X(t₂)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x₁ x₂ f_X(x₁, x₂; t₁, t₂) dx₁ dx₂

where f_X(x₁, x₂; t₁, t₂) is the joint probability function of X(t₁) and X(t₂).

For a random process, t₁ and t₂ go through all possible values, and therefore E[X(t₁)X(t₂)] can change and is a function of t₁ and t₂.
Note that:
  R_X(t₁, t₂) = R_X(t₂, t₁)
The autocovariance function of X(t) is defined by:
  C_X(t₁, t₂) = Cov[X(t₁), X(t₂)] = E[(X(t₁) − μ_X(t₁))(X(t₂) − μ_X(t₂))]
  = R_X(t₁, t₂) − μ_X(t₁)μ_X(t₂)
It is clear that if the mean of X(t) is zero, then C_X(t₁, t₂) = R_X(t₁, t₂).
If t₁ = t₂ = t we obtain
  C_X(t, t) = Var[X(t)] = E[X(t)²] − [μ_X(t)]²
  R_X(t, t) = E[X(t)²]
The normalized autocovariance function is defined by:

  ρ_X(t₁, t₂) = C_X(t₁, t₂) / √(C_X(t₁, t₁) C_X(t₂, t₂))


Example: wireless signal model

Consider the random process X(t) = a cos(ωt + Θ),
where a is the amplitude, ω is the carrier frequency, and Θ is the phase,
  Θ ~ Uniform(0, 2π), that is f_Θ(θ) = 1/(2π), 0 ≤ θ ≤ 2π;
a and ω are constants, and X(t) is a function of time.

Find
i. the mean function of X(t)
ii. the autocorrelation function of X(t)
Solution
i. Mean function of X(t)
  μ_X(t) = E[X(t)] = E{a cos(ωt + Θ)} = ∫₀^{2π} a cos(ωt + θ) f_Θ(θ) dθ

  = ∫₀^{2π} a cos(ωt + θ) (1/2π) dθ = (a/2π) ∫₀^{2π} cos(ωt + θ) dθ

  = (a/2π) [sin(ωt + θ)]₀^{2π} = (a/2π){sin(ωt + 2π) − sin(ωt)}

  = (a/2π){0}

  μ_X(t) = 0
ii. Autocorrelation function of X(t)
  R_X(t₁, t₂) = E{X(t₁) X(t₂)}
Let τ = t₁ − t₂ (the time shift); then
  R_X(t₁, t₂) = E{[a cos(ωt₁ + Θ)][a cos(ωt₂ + Θ)]}
  = a² E{cos(ωt₁ + Θ) cos(ωt₂ + Θ)}

Using cos A cos B = ½[cos(A − B) + cos(A + B)],

let A = ωt₁ + Θ and B = ωt₂ + Θ; then
  A − B = ωτ and A + B = ω(t₁ + t₂) + 2Θ

  R_X(t₁, t₂) = (a²/2) E{cos(ωτ) + cos(ω(t₁ + t₂) + 2Θ)}

  = (a²/2)(E{cos(ωτ)} + E{cos(ω(t₁ + t₂) + 2Θ)})

The second term is 0, and E{cos(ωτ)} = cos(ωτ) is a constant (it does not involve Θ), so

  R_X(t₁, t₂) = (a²/2) cos(ωτ)
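Because Θ is uniform over a full period, the expectations above are plain integrals over [0, 2π]; a numerical sketch (assumed values a = 2, ω = 3) approximates them with a Riemann sum and recovers μ_X(t) ≈ 0 and R_X(t₁, t₂) ≈ (a²/2)cos(ωτ):

```python
import math

# Numerical check of the wireless-signal example (illustrative parameters)
a, w = 2.0, 3.0    # amplitude and carrier frequency (assumed values)
N = 10_000         # grid points for theta in [0, 2*pi)

def mean_fn(t):
    # mu_X(t) = average of a*cos(w*t + theta) over theta ~ Uniform(0, 2*pi)
    return sum(a * math.cos(w * t + 2 * math.pi * k / N) for k in range(N)) / N

def autocorr(t1, t2):
    # R_X(t1, t2) = average of X(t1)*X(t2) over the random phase
    return sum(
        a * math.cos(w * t1 + 2 * math.pi * k / N)
        * a * math.cos(w * t2 + 2 * math.pi * k / N)
        for k in range(N)
    ) / N

print(mean_fn(1.0))            # ~0
tau = 0.4
print(autocorr(1.0, 1.0 - tau),
      (a * a / 2) * math.cos(w * tau))  # the two values agree
```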


Example:
A random process {X(t), t ≥ 0} with μ_X(t) = 3 and R_X(t₁, t₂) = 9 + 4e^{−0.2|t₁−t₂|}.
Determine the mean, the variance and the covariance of the random variables Z = X(5) and W = X(8).
Solution:
  E(Z) = E[X(5)] = μ_X(5) = 3,  E(W) = E[X(8)] = μ_X(8) = 3
  Var(Z) = E{[X(5)]²} − {E[X(5)]}²
since E{[X(5)]²} = R_X(5, 5),
  Var(Z) = R_X(5, 5) − {μ_X(5)}² = 9 + 4e⁰ − 9 = 4

Similarly,
  Var(W) = R_X(8, 8) − {μ_X(8)}² = 9 + 4e⁰ − 9 = 4

  Cov(Z, W) = E[X(5)X(8)] − E[X(5)]E[X(8)]
  = R_X(5, 8) − μ_X(5)μ_X(8)
Since R_X(5, 8) = 9 + 4e^{−0.2|5−8|} = 9 + 4e^{−0.6},

  Cov(Z, W) = 9 + 4e^{−0.6} − 9 = 4e^{−0.6} ≈ 2.195

Lesson 4: Classification of Stochastic Processes

We can classify random processes based on many different criteria.

Stationary and Wide-Sense Stationary Random Processes
A. Stationary Processes:
A random process {X(t), t ∈ T} is stationary, or strict-sense stationary (SSS), if its statistical properties do not change with time. For example, for a stationary process, X(t) and X(t + Δ) have the same probability distributions. In particular, we have
  F_X(x; t) = F_X(x; t + Δ), for all t and Δ.
More generally, for a stationary random process {X(t), t ∈ T}, the joint distribution of the two random variables X(t₁), X(t₂) is the same as the joint distribution of X(t₁ + Δ), X(t₂ + Δ); for example, if you have a stationary process X(t), then
  E[(X(t₁) + X(t₂))²] = E[(X(t₁ + Δ) + X(t₂ + Δ))²]
for any choice of t₁, t₂, Δ.
In short, a random process is stationary if a time shift does not change its statistical properties.
Definition. A discrete-time random process {X(n), n ∈ ℤ} is strict-sense stationary, or simply stationary, if for every integer D and all choices of n₁, n₂, …, n_k, the joint CDF of
  X(n₁), X(n₂), …, X(n_k)
is the same CDF as that of
  X(n₁ + D), X(n₂ + D), …, X(n_k + D).
That is, for all real numbers x₁, x₂, …, x_k we have
  F_X(x₁, …, x_k; n₁, …, n_k) = F_X(x₁, …, x_k; n₁ + D, …, n_k + D)
This can be written as
  F_{X(n₁), …, X(n_k)}(x₁, …, x_k) = F_{X(n₁+D), …, X(n_k+D)}(x₁, …, x_k)
Definition. A continuous-time random process {X(t), t ∈ ℝ} is strict-sense stationary, or simply stationary, if for every Δ and all choices of t₁, t₂, …, t_k, the joint CDF of
  X(t₁), X(t₂), …, X(t_k)
is the same CDF as that of
  X(t₁ + Δ), X(t₂ + Δ), …, X(t_k + Δ).
That is, for all real numbers x₁, x₂, …, x_k we have


  F_X(x₁, …, x_k; t₁, …, t_k) = F_X(x₁, …, x_k; t₁ + Δ, …, t_k + Δ)
This can be written as
  F_{X(t₁), …, X(t_k)}(x₁, …, x_k) = F_{X(t₁+Δ), …, X(t_k+Δ)}(x₁, …, x_k)
B. Wide-Sense Stationary Processes:
A random process is called weak-sense stationary or wide-sense stationary (WSS) if its mean function and its autocorrelation function do not change by shifts in time. More precisely, X(t) is WSS if, for all t₁, t₂,
1. E[X(t₁)] = E[X(t₂)] = μ, a constant (the mean is stationary in time);
2. for τ = t₁ − t₂ (the time shift),
  R_X(t₁, t₂) = E[X(t₁)X(t₂)] = R_X(τ).
Note that the first condition states that the mean function μ_X(t) is not a function of time t; thus we can write μ_X(t) = μ_X. The second condition states that the correlation function R_X(t₁, t₂) is only a function of the time shift τ and not of the specific times t₁, t₂.
Definition
A continuous-time random process {X(t), t ∈ ℝ} is weak-sense stationary or wide-sense stationary (WSS) if
1. μ_X(t) = μ_X, for all t
2. R_X(t₁, t₂) = R_X(t₁ − t₂) = R_X(τ), for all t₁, t₂
Definition
A discrete-time random process {X(n), n ∈ ℤ} is weak-sense stationary or wide-sense stationary (WSS) if
1. μ_X(n) = μ_X, for all n
2. R_X(n₁, n₂) = R_X(n₁ − n₂), for all n₁, n₂
Note that a strict-sense stationary process is also a WSS process, but in general, the converse is not true.
Example: wireless signal model
Consider the random process X(t) = a cos(ωt + Θ),
where a is the amplitude, ω is the carrier frequency, and Θ is the phase,
  Θ ~ Uniform(0, 2π), that is f_Θ(θ) = 1/(2π), 0 ≤ θ ≤ 2π;

a and ω are constants, and X(t) is a function of time.

Show that X(t) is WSS.


Solution
The mean function of X(t) is μ_X(t) = 0, a constant.

The autocorrelation function is R_X(t₁, t₂) = (a²/2) cos(ωτ), a function of τ = t₁ − t₂ only.

Since μ_X(t) is a constant that does not depend on time, and R_X(t₁, t₂) depends only on the

time shift τ, therefore, X(t) is WSS.
Example:
Consider the random process X(t) = A cos(ωt + Θ), where A is a random variable with mean μ_A and variance σ_A², Θ ~ Uniform(0, 2π), and A and Θ are independent. Find
i. the mean function of X(t)
ii. the autocorrelation function of X(t), and
iii. show that X(t) is WSS
Solution:
i. Mean function:
  μ_X(t) = E[X(t)] = E{A cos(ωt + Θ)} = E{A} E{cos(ωt + Θ)}   (by independence)

  E{cos(ωt + Θ)} = ∫₀^{2π} cos(ωt + θ) f_Θ(θ) dθ = (1/2π) ∫₀^{2π} cos(ωt + θ) dθ

  = (1/2π) [sin(ωt + θ)]₀^{2π}

  = (1/2π) [sin(ωt + 2π) − sin(ωt)] = 0

Therefore

  μ_X(t) = E{A} · 0 = 0
ii. Correlation function:
  R_X(t₁, t₂) = E{X(t₁)X(t₂)} = E{A cos(ωt₁ + Θ) · A cos(ωt₂ + Θ)}
Because A and Θ are independent,
  E{A² cos(ωt₁ + Θ) cos(ωt₂ + Θ)} = E{A²} E{cos(ωt₁ + Θ) cos(ωt₂ + Θ)}

( ) ( ) , ( ) ( )-

Let ( ) ( ) then
( ) ( )
Therefore
* + * ( ( ) ) ( ( ) )+

* + [ ( ( )) ( ( ) )]

* +[ { ( ( ))} * ( ( ) )+]

The second term is zero.

( ) * + ( ( )) ( ( ))

( ) is WSS random process because the mean function is a constant (=0) and the
autocorrelation function is only a function of a time difference .

Independent and independent identically distributed (iid) Random Processes

A. Independent Processes:
In a random process X(t), if the X(tᵢ) for i = 1, 2, …, n are independent random variables, so that for n = 1, 2, …

  F_X(x₁, …, xₙ; t₁, …, tₙ) = ∏ᵢ₌₁ⁿ F_X(xᵢ; tᵢ)

and  f_X(x₁, …, xₙ; t₁, …, tₙ) = ∏ᵢ₌₁ⁿ f_X(xᵢ; tᵢ)
or  P(X(t₁) ≤ x₁, X(t₂) ≤ x₂, …, X(tₙ) ≤ xₙ)
  = P(X(t₁) ≤ x₁) P(X(t₂) ≤ x₂) ⋯ P(X(tₙ) ≤ xₙ),
then we call X(t) an independent random process. Thus, a first-order distribution is sufficient to characterize an independent random process X(t).

B. Independent and identically distributed (iid) random process

A random process {X(n), n = 1, 2, …} is said to be independent and identically distributed (iid) if any finite number, say k, of the random variables X(n₁), X(n₂), …, X(n_k) are mutually independent and have a common cumulative distribution function F_X(x). The joint cdf and pdf of X(n₁), X(n₂), …, X(n_k) are given respectively by:


  F_{X(n₁), …, X(n_k)}(x₁, …, x_k) = ∏ᵢ₌₁ᵏ F_X(xᵢ)

  f_{X(n₁), …, X(n_k)}(x₁, …, x_k) = ∏ᵢ₌₁ᵏ f_X(xᵢ)

Example.
Consider the random process {X(n), n = 1, 2, …} in which the X(n) are iid standard normal random variables.
(a) Write down f_{X(n)}(x) for n = 1, 2, …
(b) Write down f_{X(n₁),X(n₂)}(x₁, x₂) for n₁ ≠ n₂
Solution.
(a) Since X(n) ~ N(0, 1), we have

  f_{X(n)}(x) = (1/√(2π)) e^{−x²/2}

(b) If n₁ ≠ n₂, then X(n₁) and X(n₂) are independent (because of the iid assumption), so

  f_{X(n₁),X(n₂)}(x₁, x₂) = f_{X(n₁)}(x₁) f_{X(n₂)}(x₂)

  = (1/2π) e^{−(x₁² + x₂²)/2}
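A small sketch confirming that the iid joint pdf is just the product of the marginals:

```python
import math

def phi(x):
    # standard normal pdf: (1/sqrt(2*pi)) * exp(-x^2/2)
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def joint(x1, x2):
    # iid => the joint pdf factorizes into marginals
    return phi(x1) * phi(x2)

# The factorized form equals the closed form (1/(2*pi)) * exp(-(x1^2 + x2^2)/2)
x1, x2 = 0.5, -1.2
closed = math.exp(-(x1**2 + x2**2) / 2) / (2 * math.pi)
assert abs(joint(x1, x2) - closed) < 1e-12
print(joint(x1, x2))
```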


Solved Problems (1)

Problem 1
Let Z₁, Z₂, … be a sequence of iid random variables with mean E[Zᵢ] = μ and Var[Zᵢ] = σ².
Define the discrete-time random process {X(n), n = 1, 2, …} as
  X(n) = Z₁ + Z₂ + ⋯ + Zₙ
(a) Find the mean function μ_X(n)

(b) Find the autocorrelation function R_X(m, n) and covariance function C_X(m, n) for m, n = 1, 2, …
Problem 2
Consider the random process X(t) = A + Bt, for t ∈ [0, ∞), where A and B are independent random variables, each with mean 1 and variance 1.
(a) Find the mean function μ_X(t)
(b) Find the autocorrelation function R_X(t₁, t₂) and covariance function C_X(t₁, t₂)
Problem 3
Consider the random process {X(t), t ∈ [0, ∞)} defined as
  X(t) = cos(ωt + U)
where U ~ Uniform(0, 2π). Show that X(t) is a WSS process.
Problem 4
Given a random process {X(t), t ≥ 0} with μ_X(t) = 3 and R_X(t₁, t₂) = 9 + 4e^{−0.2|t₁−t₂|}. Suppose Z = X(5) and W = X(8). Find:

(a) E(Z).
(b) Var(Z).
(c) Cov(Z, W)


Solutions (1)

Problem 1 (Solution)
(a) μ_X(n) = E[X(n)]
  = E[Z₁ + Z₂ + ⋯ + Zₙ]
  = E[Z₁] + E[Z₂] + ⋯ + E[Zₙ] = nμ

(b) Let m ≤ n. Then
  R_X(m, n) = E[X(m)X(n)]
  = E[(Z₁ + ⋯ + Z_m)(Z₁ + ⋯ + Zₙ)]
  = E[(Z₁ + ⋯ + Z_m)²] + E[(Z₁ + ⋯ + Z_m)(Z_{m+1} + ⋯ + Zₙ)]
  = (mσ² + m²μ²) + m(n − m)μ² = mσ² + mnμ²
  C_X(m, n) = R_X(m, n) − μ_X(m)μ_X(n) = mσ² + mnμ² − (mμ)(nμ) = mσ²

Similarly, for n ≤ m,
  R_X(m, n) = nσ² + mnμ²
  C_X(m, n) = nσ²
In general, R_X(m, n) = min(m, n)σ² + mnμ² and C_X(m, n) = min(m, n)σ².

Problem 2 (Solution)
(a) μ_X(t) = E[X(t)]
  = E[A + Bt]
  = E[A] + tE[B] = 1 + t

Also, since A and B are independent,

  Var[X(t)] = Var[A] + t²Var[B] = 1 + t²


(b) R_X(t₁, t₂) = E[X(t₁)X(t₂)]
  = E[(A + Bt₁)(A + Bt₂)]
  = E[A² + AB(t₁ + t₂) + B²t₁t₂]

  = E[A²] + E[A]E[B](t₁ + t₂) + E[B²]t₁t₂   (A and B independent)

  = 2 + (t₁ + t₂) + 2t₁t₂   (since E[A²] = E[B²] = 1 + 1 = 2)

To find the covariance function:

  C_X(t₁, t₂) = R_X(t₁, t₂) − μ_X(t₁)μ_X(t₂)

  = 2 + (t₁ + t₂) + 2t₁t₂ − (1 + t₁)(1 + t₂)

  = 1 + t₁t₂
Problem 3 (Solution)
We need to check two conditions:
1. μ_X(t) = μ_X, a constant, and
2. R_X(t₁, t₂) = R_X(t₁ − t₂) = R_X(τ)
We have
  μ_X(t) = E[X(t)]
  = E[cos(ωt + U)]

  = ∫₀^{2π} cos(ωt + u) (1/2π) du

  = (1/2π)[sin(ωt + u)]₀^{2π} = 0

We can also find R_X(t₁, t₂):

  R_X(t₁, t₂) = E[X(t₁)X(t₂)]
  = E[cos(ωt₁ + U) cos(ωt₂ + U)]
  = ½E[cos(ω(t₁ − t₂))] + ½E[cos(ω(t₁ + t₂) + 2U)]

Let τ = t₁ − t₂; then

  R_X(t₁, t₂) = ½E[cos(ωτ)] + ½E[cos(ω(t₁ + t₂) + 2U)]

  = ½cos(ωτ) + ½E[cos(ω(t₁ + t₂) + 2U)]

  = ½cos(ωτ) + 0

  R_X(t₁, t₂) = ½cos(ωτ)

As we see, both conditions are satisfied; thus X(t) is WSS.


Problem 4 (Solution)
(a) E(Z) = E[X(5)] = μ_X(5) = 3.
(b) Var(Z) = E{[X(5)]²} − {E[X(5)]}²
Since E{[X(5)]²} = R_X(5, 5),

  Var(Z) = R_X(5, 5) − {μ_X(5)}²
  = 9 + 4e⁰ − 9 = 4

(c) Cov(Z, W) = Cov[X(5), X(8)]
  = E[X(5)X(8)] − E[X(5)]E[X(8)]
We know that E[X(5)X(8)] = R_X(5, 8). By using this,

  Cov(Z, W) = R_X(5, 8) − μ_X(5)μ_X(8)

Since R_X(5, 8) = 9 + 4e^{−0.2|5−8|} = 9 + 4e^{−0.6}
and μ_X(5)μ_X(8) = 9,

Therefore, Cov(Z, W) = 4e^{−0.6} ≈ 2.195

Markov Chains

Lesson 5: Discrete-Time Markov Chains
Lesson 6: Higher Transition Probability Matrix and Probability Distributions
Lesson 7: Stationary Distribution and Regular Markov Chain
Lesson 8: Classification of States

Lesson 5: Discrete-Time Markov Chains

Basic Definitions
Let {Xₙ, n = 0, 1, 2, …} be a stochastic process taking values in a state space S that has N states. To understand the behavior of this process we will need to calculate probabilities like:
  P{X₀ = x₀, X₁ = x₁, …, Xₙ = xₙ}
This can be computed by multiplying conditional probabilities as follows:
  P{X₀ = x₀, X₁ = x₁, …, Xₙ = xₙ}
  = P(X₀ = x₀) P(X₁ = x₁ | X₀ = x₀) ⋯ P(Xₙ = xₙ | X₀ = x₀, …, X_{n−1} = x_{n−1})
Example A.
We randomly select playing cards from an ordinary deck, recording the colour (R = red, B = black) of each card. The state space is S = {R, B}. Let's calculate the chance of observing the sequence R, B, R using two different sampling methods.
(a) Without replacement:
  P{X₁ = R, X₂ = B, X₃ = R}
  = P(X₁ = R) P(X₂ = B | X₁ = R) P(X₃ = R | X₁ = R, X₂ = B)
  = (26/52)(26/51)(25/50)

(b) With replacement:

  P{X₁ = R, X₂ = B, X₃ = R}
  = P(X₁ = R) P(X₂ = B | X₁ = R) P(X₃ = R | X₁ = R, X₂ = B)
  = (26/52)(26/52)(26/52) = 1/8

Definition.
The process {Xₙ, n = 0, 1, 2, …} is called a Markov chain if for any n and any collection of states x₀, x₁, …, x_{n+1} we have:
  P(X_{n+1} = x_{n+1} | Xₙ = xₙ, …, X₁ = x₁, X₀ = x₀) = P(X_{n+1} = x_{n+1} | Xₙ = xₙ)
For a Markov chain, the future depends only on the current state and not on the history.
Exercise.
In Example A, calculate P(X₃ = R | X₂ = B, X₁ = R) and P(X₃ = R | X₂ = B) under each sampling method, and confirm that only "with replacement" do we get a Markov chain.


Discrete-Time Markov chains

A discrete-time Markov chain {Xₙ, n = 0, 1, 2, …} has a discrete state space S = {0, 1, 2, …}, where this set may be finite or infinite. If Xₙ = i, then the Markov chain is said to be in state i at time n (or at the nth step). A discrete-time Markov chain {Xₙ, n ≥ 0} is characterized by:
  p_ij(n) = P(X_{n+1} = j | Xₙ = i)
where the p_ij(n) are known as one-step transition probabilities.
If p_ij(n) = P(X_{n+1} = j | Xₙ = i) is independent of n, then the Markov chain is said to possess stationary transition probabilities and the process is referred to as a homogeneous Markov chain. Otherwise the process is known as a nonhomogeneous Markov chain. Note that the concept of a Markov chain having stationary transition probabilities should not be confused with that of being a stationary random process: the Markov process, in general, is not stationary. We will assume that all our Markov chains are time homogeneous.
Definition.
A Markov chain {Xₙ, n ≥ 0} is called time homogeneous if, for any states i, j ∈ S, we have:
  P(X_{n+1} = j | Xₙ = i) = p_ij
for some function p : S × S → [0, 1].
Transition probability matrix
Often transition probabilities are listed in a matrix, called the state transition matrix or transition probability matrix, and usually denoted by P.
Let {Xₙ, n ≥ 0} be a homogeneous Markov chain with a discrete finite state space S = {0, 1, 2, …, m}; then
  p_ij = P(X_{n+1} = j | Xₙ = i),  i, j ∈ S,
regardless of the value of n. The transition probability matrix of {Xₙ} is defined by:

  P = [p_ij] =
    [ p₀₀  p₀₁  ⋯  p₀ₘ ]
    [ p₁₀  p₁₁  ⋯  p₁ₘ ]
    [  ⋮    ⋮        ⋮  ]
    [ pₘ₀  pₘ₁  ⋯  pₘₘ ]

where p_ij ≥ 0 and ∑ⱼ p_ij = 1 for every i.

State Transition Diagram

A Markov chain is usually shown by a state transition diagram. Consider a Markov chain with three possible states 1, 2 and 3, and the following transition probability matrix:

  P =
    [ 1/4  1/2  1/4 ]
    [ 1/3   0   2/3 ]
    [ 1/2   0   1/2 ]

In the state transition diagram for this Markov chain there are three possible states 1, 2 and 3, and the arrows from each state to the other states show the transition probabilities p_ij. When there is no arrow from state i to state j, it means that p_ij = 0.

Example
Consider the Markov chain above. Find
(a) P(X₄ = 3 | X₃ = 2)
(b) P(X₃ = 1 | X₂ = 1)
(c) If we know P(X₀ = 1) = 1/3, find P(X₀ = 1, X₁ = 2)
(d) If we know P(X₀ = 1) = 1/3, find P(X₀ = 1, X₁ = 2, X₂ = 3)
Solution
(a) By definition, P(X₄ = 3 | X₃ = 2) = p₂₃ = 2/3

(b) By definition, P(X₃ = 1 | X₂ = 1) = p₁₁ = 1/4

(c) P(X₀ = 1, X₁ = 2) = P(X₀ = 1) P(X₁ = 2 | X₀ = 1)

  = (1/3)(1/2) = 1/6

(d) P(X₀ = 1, X₁ = 2, X₂ = 3)
  = P(X₀ = 1) P(X₁ = 2 | X₀ = 1) P(X₂ = 3 | X₀ = 1, X₁ = 2)
  = P(X₀ = 1) P(X₁ = 2 | X₀ = 1) P(X₂ = 3 | X₁ = 2)   (by the Markov property)

  = (1/3)(1/2)(2/3) = 1/9


Example.
A man either drives his car or takes a train to work each day. Suppose he never takes the train two days in a row, but if he drives to work, then the next day he is just as likely to drive again as he is to take the train.
The state space of the system is {t (train), d (drive)}. This stochastic process is a Markov chain since the outcome on any day depends only on what happened the preceding day. The transition matrix of the Markov chain is

       t    d
  t  [  0    1  ]
  d  [ 1/2  1/2 ]

The first row of the matrix corresponds to the fact that he never takes the train two days in a row, and so he definitely will drive the day after he takes the train. The second row of the matrix corresponds to the fact that the day after he drives he will drive or take the train with equal probability.
Example.
Card colour with replacement:

       B    R
  B  [ 1/2  1/2 ]
  R  [ 1/2  1/2 ]

Example.
Three boys A, B and C are throwing a ball to each other. A always throws the ball to B and B always throws the ball to C; but C is just as likely to throw the ball to B as to A. Let Xₙ denote the nth person to be thrown the ball. The state space of the system is {A, B, C}. This is a Markov chain since the person throwing the ball is not influenced by those who previously had the ball. The transition matrix of the Markov chain is

       A    B    C
  A  [  0    1    0  ]
  B  [  0    0    1  ]
  C  [ 1/2  1/2   0  ]

The first row of the matrix corresponds to the fact that A always throws the ball to B. The second row corresponds to the fact that B always throws the ball to C. The last row corresponds to the fact that C throws the ball to A or B with equal probability (and does not throw it to himself).

26 Lesson 5: Discrete – Time Markov Chains Dr. Hussein Eledum


University of Tabuk – Faculty of Science – Dept. of Statistics - Stochastic Processes STAT 321 1438-39

Lesson 6: Higher Transition Probability Matrix and Probability Distributions

n-Step Transition probability matrix

Consider a Markov chain {X_n, n = 0, 1, 2, ...}. If X_n = i, then X_{n+1} = j with probability p_ij. That is, p_ij is the probability of going from state i to state j in one step:
    p_ij = P(X_{n+1} = j | X_n = i).
Now suppose that we are interested in finding the probability of going from state i to state j in two steps, i.e.
    p_ij^(2) = P(X_{n+2} = j | X_n = i).
We can find this probability by applying the law of total probability. In particular, we argue that X_{n+1} can take one of the possible values k in S. Thus we can write
    p_ij^(2) = P(X_{n+2} = j | X_n = i)
             = Σ_k P(X_{n+2} = j | X_{n+1} = k, X_n = i) P(X_{n+1} = k | X_n = i)
             = Σ_k p_kj p_ik    (by the Markov property).
We conclude
    p_ij^(2) = Σ_k p_ik p_kj.
That means, in order to get from state i to state j in two steps, we pass through some intermediate state k.

Accordingly, we can define the two-step transition matrix as
    P^(2) = [ p_ij^(2) ].
Thus, we conclude that the two-step transition matrix can be obtained by squaring the one-step transition matrix, i.e.,
    P^(2) = P · P = P^2.
Similarly,
    P^(3) = P^3.
Generally, we can define the n-step transition probabilities as
    p_ij^(n) = P(X_{m+n} = j | X_m = i).
That means, in order to get from state i to state j in n steps, we may pass through some intermediate states k_1, k_2, ..., k_{n-1}.


and the n-step transition matrix, P^(n), as
    P^(n) = [ p_ij^(n) ].
Similar to the case of two-step transition probabilities, we can show that
    P^(n) = P^n.
More generally, let m and n be two positive integers. In order to get to state j in (m + n) steps, the chain will be at some intermediate state k after m steps. To obtain p_ij^(m+n), we sum over all possible intermediate states k:
    p_ij^(m+n) = P(X_{m+n} = j | X_0 = i) = Σ_k p_ik^(m) p_kj^(n).
The above equation is called the Chapman-Kolmogorov equation.
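The Chapman-Kolmogorov equation is easy to confirm numerically: P^(m+n) must equal the matrix product P^m P^n. The 3-state matrix below is hypothetical, chosen only for illustration:

```python
import numpy as np

# A hypothetical transition matrix (rows sum to 1).
P = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

m, n = 2, 3
P_m = np.linalg.matrix_power(P, m)
P_n = np.linalg.matrix_power(P, n)
P_mn = np.linalg.matrix_power(P, m + n)

# Chapman-Kolmogorov: p_ij^(m+n) = sum_k p_ik^(m) p_kj^(n),
# i.e. P^(m+n) = P^m @ P^n.
assert np.allclose(P_mn, P_m @ P_n)
```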
The probability distribution of X_n
Consider a Markov chain {X_n, n = 0, 1, 2, ...} where S = {0, 1, 2, ...}. Suppose that we know the probability distribution of X_0. More specifically, define the row vector π^(0) as
    π^(0) = [ P(X_0 = 0), P(X_0 = 1), P(X_0 = 2), ... ].
How can we obtain the probability distribution of X_1, X_2, ...? We can use the law of total probability. More specifically, for any j ∈ S, we can write

    P(X_1 = j) = Σ_i P(X_0 = i) P(X_1 = j | X_0 = i)
               = Σ_i π_i^(0) p_ij.

If we generally define
    π^(n) = [ P(X_n = 0), P(X_n = 1), P(X_n = 2), ... ],
we can rewrite the above result in the form of matrix multiplication
    π^(1) = π^(0) P,

where P is the state transition matrix. Similarly, we can write

    π^(2) = π^(1) P = π^(0) P^2.

More generally, we can write

    π^(n+1) = π^(n) P,
    π^(n)   = π^(0) P^n.


Example.
Consider the Markov chain for the example of the man who either drives his car or takes a train to work:

           t     d
    t  [   0     1  ]
    d  [  1/2   1/2 ]

Here t is the state of taking a train to work and d of driving to work.

    P^2 = [ 1/2  1/2 ]      P^4 = P^2 · P^2 = [ 3/8    5/8  ]
          [ 1/4  3/4 ],                       [ 5/16  11/16 ]

Thus the probability that the process changes from, say, state t to state d in exactly 4 steps is p_td^(4) = 5/8. Similarly, p_tt^(4) = 3/8, p_dt^(4) = 5/16, and p_dd^(4) = 11/16.

Suppose that on the first day of work, the man tossed a fair die and drove to work if and only if a 6 appeared. In other words, π^(0) = [5/6, 1/6] is the initial probability distribution. Then

    π^(4) = π^(0) P^4 = [5/6, 1/6] [ 3/8    5/8  ]  = [35/96, 61/96]
                                   [ 5/16  11/16 ]

is the probability distribution after 4 days, i.e. P(X_4 = t) = 35/96 and P(X_4 = d) = 61/96.
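These matrix powers and the resulting distribution can be checked with a few lines of NumPy:

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [0.5, 0.5]])
P4 = np.linalg.matrix_power(P, 4)
# p_tt^(4) = 3/8, p_td^(4) = 5/8, p_dt^(4) = 5/16, p_dd^(4) = 11/16
assert np.allclose(P4, [[3/8, 5/8], [5/16, 11/16]])

p0 = np.array([5/6, 1/6])   # initial distribution from the die toss
p4 = p0 @ P4                # distribution after 4 steps
assert np.allclose(p4, [35/96, 61/96])
```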

Example.
Consider a system that can be in one of two possible states, S = {0, 1}, with a given 2 × 2 transition matrix P. Suppose that the system is in state 0 at time n = 0, i.e., X_0 = 0. Find the probability that the system is in state 1 at time 3.
Solution:
Here we know π^(0) = [P(X_0 = 0), P(X_0 = 1)] = [1, 0]. Then
    π^(3) = π^(0) P^3,
and the probability that the system is in state 1 at time 3 is the second component of π^(3), i.e. π_1^(3) = P(X_3 = 1).

Example.
Consider the Markov chain for the example of the three boys A, B and C who are throwing a ball to each other:

           A     B     C
    A  [   0     1     0  ]
    B  [   0     0     1  ]
    C  [  1/2   1/2    0  ]

Suppose C was the first person with the ball, i.e. π^(0) = [0, 0, 1] is the initial probability distribution. Then

    π^(1) = π^(0) P = [1/2, 1/2, 0],
    π^(2) = π^(1) P = [0, 1/2, 1/2],
    π^(3) = π^(2) P = [1/4, 1/4, 1/2].

Thus, after 3 throws, the probability that A has the ball is 1/4, that B has the ball is 1/4 and that C has the ball is 1/2: P(X_3 = A) = 1/4, P(X_3 = B) = 1/4 and P(X_3 = C) = 1/2.
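The three distributions can be verified numerically:

```python
import numpy as np

P = np.array([[0.0, 1.0, 0.0],   # A always throws to B
              [0.0, 0.0, 1.0],   # B always throws to C
              [0.5, 0.5, 0.0]])  # C throws to A or B with equal probability

p = np.array([0.0, 0.0, 1.0])    # C has the ball at time 0
expected = [[0.5, 0.5, 0.0],     # after 1 throw
            [0.0, 0.5, 0.5],     # after 2 throws
            [0.25, 0.25, 0.5]]   # after 3 throws
for e in expected:
    p = p @ P
    assert np.allclose(p, e)
```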

Example. A school contains 200 boys and 150 girls. One student is selected after another to take an eye examination.
Explain whether this process is a Markov chain or not, and why.
Solution
The state space of the stochastic process is S = {b (boy), g (girl)}. However, this process is not a Markov chain since, for example, the probability that the third person is a girl depends not only on the outcome of the second trial but on both the first and second trials.


Lesson 7: Stationary Distribution and Regular Markov Chain

Stationary Distribution
Let P be the transition probability matrix of a Markov chain {X_n}. If there exists a probability vector π̂ such that:
    π̂ P = π̂    (1)
then π̂ is called a stationary distribution for the Markov chain.
Example.
Find the stationary distribution π̂ for the transition matrix P of a Markov chain:

    P = [  0    1  ]
        [ 1/2  1/2 ]

We seek a probability vector with two components, which we can denote by π̂ = [x, 1 − x], such that π̂ P = π̂:

    [x, 1 − x] [  0    1  ]  = [x, 1 − x]
               [ 1/2  1/2 ]

Multiplying out the left side of the above matrix equation, we obtain
    [(1 − x)/2, x + (1 − x)/2] = [x, 1 − x],  or  (1 − x)/2 = x,  or  x = 1/3.

Thus π̂ = [x, 1 − x] = [1/3, 2/3].

Thus in the long run, the man will take the train to work 1/3 of the time, and drive to work the other 2/3 of the time.
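Solving π̂P = π̂ together with the normalization Σ_i π̂_i = 1 is a small linear system. A sketch (the helper `stationary` is ours, not a library routine):

```python
import numpy as np

def stationary(P):
    """Solve pi P = pi with sum(pi) = 1 as a least-squares linear system."""
    n = len(P)
    A = np.vstack([P.T - np.eye(n),   # balance equations: pi P - pi = 0
                   np.ones(n)])       # normalization row: sum(pi) = 1
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.0, 1.0],
              [0.5, 0.5]])
assert np.allclose(stationary(P), [1/3, 2/3])
```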

Example
Find the stationary distribution π̂ for the transition matrix P of a Markov chain:

    P = [  0    1    0  ]
        [  0    0    1  ]
        [ 1/2  1/2   0  ]

Suppose that the vector is π̂ = [x, y, z] with x + y + z = 1.

Solution
The condition π̂ P = π̂ gives z/2 = x, x + z/2 = y and y = z. Hence y = z = 2x, so x + 2x + 2x = 1 and x = 1/5. Thus

    π̂ = [1/5, 2/5, 2/5].

Thus in the long run, A will be thrown the ball 20% of the time, and B and C 40% of the time.


Regular Markov chain

A Markov chain is called regular if there is a finite positive integer m such that after m time-steps, every state has a nonzero chance of being occupied, no matter what the initial state. Let A > 0 denote that every element a_ij of A satisfies the condition a_ij > 0. Then, for a regular Markov chain with transition probability matrix P, there exists an m > 0 such that P^m > 0.
Example.
The following transition matrix P of a Markov chain:

    P = [  0    1  ]
        [ 1/2  1/2 ]

is a regular matrix since

    P^2 = [  0    1  ] [  0    1  ]  = [ 1/2  1/2 ]
          [ 1/2  1/2 ] [ 1/2  1/2 ]    [ 1/4  3/4 ]

is positive in every entry.
Example.
The following transition matrix P of a Markov chain:

    P = [  1    0  ]
        [ 1/2  1/2 ]

is not a regular matrix since

    P^2 = [  1    0  ],   P^3 = [  1    0  ],   ...
          [ 3/4  1/4 ]          [ 7/8  1/8 ]

every power P^m will have 1 and 0 in the first row.
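A brute-force regularity check simply powers the matrix until every entry is positive, using a practical cutoff in place of the theoretical bound (the helper `is_regular` is ours):

```python
import numpy as np

def is_regular(P, max_power=100):
    """Return True if some power P^m (m <= max_power) has all entries > 0."""
    Q = np.eye(len(P))
    for _ in range(max_power):
        Q = Q @ P
        if (Q > 0).all():
            return True
    return False

P_reg = np.array([[0.0, 1.0], [0.5, 0.5]])
P_irr = np.array([[1.0, 0.0], [0.5, 0.5]])
assert is_regular(P_reg)        # P^2 is already strictly positive
assert not is_regular(P_irr)    # the first row stays [1, 0] in every power
```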


Stationary distribution of regular Markov chain
Let {X_n} be a regular finite-state Markov chain with transition matrix P. Then
    P^n → Π̂  as n → ∞,
where Π̂ is a matrix whose rows are identical and equal to the stationary distribution π̂ for the Markov chain defined by Eq. (1). In other words, P^n approaches Π̂ means that each entry of P^n approaches the corresponding entry of Π̂, and π^(n) approaches π̂ means that each component of π^(n) approaches the corresponding component of π̂.


Example. For the regular transition matrix P of a Markov chain below, find the matrix Π̂:

    P = [  0    1  ]
        [ 1/2  1/2 ]

We found before that π̂ = [1/3, 2/3]. Thus:

    Π̂ = [ 1/3  2/3 ]
        [ 1/3  2/3 ]

We exhibit some of the powers of P to indicate the above result:

    P^2 = [ 0.5   0.5  ],   P^4 = [ 0.375   0.625  ],   P^8 ≈ [ 0.336  0.664 ]
          [ 0.25  0.75 ]          [ 0.3125  0.6875 ]           [ 0.332  0.668 ]

The rows of P^n approach [1/3, 2/3] ≈ [0.333, 0.667] as n grows.
Theorem.
If a stochastic matrix P has a 1 on the main diagonal, then P is not regular (unless P is the 1 × 1 matrix [1]).
Calculating probability
The probabilities for a Markov chain are computed using the initial probabilities P(X_0 = i) and the transition probabilities p_ij:
    P(X_0 = i_0, X_1 = i_1, ..., X_n = i_n) = P(X_0 = i_0) p_{i_0 i_1} p_{i_1 i_2} ··· p_{i_{n-1} i_n}.

Example.
Consider a Markov chain on the states {0, 1} with the following transition matrix:

           0     1
    0  [  1/4   3/4 ]
    1  [  1/6   5/6 ]

1. To find the probability that the process follows a certain path, you multiply the initial probability with the conditional probabilities. For example, what is the chance that the process begins with 01010?
    P(X_0 = 0, X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 0)
      = P(X_0 = 0) p_01 p_10 p_01 p_10
      = P(X_0 = 0) (3/4)(1/6)(3/4)(1/6) = P(X_0 = 0) · 1/64.


2. Find the chance that the process begins with 00000.

    P(X_0 = 0, X_1 = 0, X_2 = 0, X_3 = 0, X_4 = 0) = P(X_0 = 0) p_00^4 = P(X_0 = 0) (1/4)^4 = P(X_0 = 0) / 256.

If (as in many situations) we were interested in conditional probabilities, given that X_0 = 0, we simply drop the factor P(X_0 = 0); that is,

    P(X_1 = 1, X_2 = 0, X_3 = 1, X_4 = 0 | X_0 = 0) = p_01 p_10 p_01 p_10 = 1/64.
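Path probabilities of this kind can be computed generically. A sketch using the transition matrix of this example (`path_prob` is our own helper, and the conditional form drops the initial probability as described above):

```python
import numpy as np

P = np.array([[1/4, 3/4],
              [1/6, 5/6]])

def path_prob(path, P):
    """P(X_1 = path[1], ..., X_n = path[n] | X_0 = path[0])."""
    prob = 1.0
    for i, j in zip(path, path[1:]):
        prob *= P[i, j]
    return prob

# Conditional on X_0 = 0, the path 01010 has probability
# p01 * p10 * p01 * p10 = (3/4 * 1/6)^2 = 1/64.
assert np.isclose(path_prob([0, 1, 0, 1, 0], P), 1/64)
assert np.isclose(path_prob([0, 0, 0, 0, 0], P), (1/4)**4)
```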

Example.
If we start in state zero, then π^(0) = [1, 0] and

    π^(1) = π^(0) P = [1, 0] [ 1/4  3/4 ]  = [1/4, 3/4].
                             [ 1/6  5/6 ]

On the other hand, if we flip a coin to choose the starting position, then π^(0) = [1/2, 1/2] and

    π^(1) = π^(0) P = [1/2, 1/2] [ 1/4  3/4 ]  = [5/24, 19/24].
                                 [ 1/6  5/6 ]


Lesson 8: Classification of States

Accessible States:
State j is said to be accessible from state i if p_ij^(n) > 0 for some n ≥ 0, written as i → j.
Communicative states
Two states i and j are called communicative states, written as i ↔ j, if they are accessible from each other. In other words, i ↔ j means i → j and j → i.
Irreducible Markov Chain
A Markov chain is said to be irreducible if all states communicate with each other.
Communication is an equivalence relation. That means that
1. Every state communicates with itself
2. i ↔ j implies j ↔ i
3. i ↔ j and j ↔ k together imply i ↔ k
- If the transition matrix is not irreducible, then it is not regular.
- If the transition matrix is irreducible and at least one entry of the main diagonal is nonzero, then it is regular.
Class structure
The accessibility relation divides states into classes. Within each class, all states communicate with each other, but no pair of states in different classes communicates. The chain is irreducible if there is only one class.
Example:
Consider the Markov chain shown in figure below.

(State transition diagram over the states 1, 2, 3, 4, 5.)

Any state 1, 2, 3, 4 is accessible from any of the five states, but 5 is not accessible from 1, 2, 3, 4. So we have two classes: {1, 2, 3, 4} and {5}. The chain is not irreducible.
Example
Consider the chain on states 1, 2, 3, determine whether it is irreducible or not?


Solution
There is only one class: because 1 ↔ 2 and 2 ↔ 3, this is an irreducible Markov chain.
Another way to see this is from the state transition diagram: every state is accessible from every other state (p_ij^(n) > 0 for some n, for every pair i, j), so the chain is irreducible.
Example
Consider the chain on states 1, 2, 3, 4 shown in the figure.

(State transition diagram over the states 1, 2, 3, 4.)

This chain has three classes, {1, 2}, {3} and {4}, and is hence not irreducible; equivalently, for some pairs of states i, j we have p_ij^(n) = 0 for all n.
Example
Consider the Markov chain shown in Figure below. It is assumed that when there is an arrow from state i to state j, then p_ij > 0. Find the classes for this Markov chain.

Solution
There are 4 communicating classes in this Markov chain. States 1, 2 communicate with each other, and states 3, 4 communicate with each other, but they do not communicate with any other nodes. State 5 does not communicate with any other states, so by itself it is a class. States 6, 7 and 8 constitute another class. Thus, the classes are

Class 1 = {state 1, state 2}
Class 2 = {state 3, state 4}
Class 3 = {state 5}
Class 4 = {state 6, state 7, state 8}
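Communicating classes can be found mechanically: compute which states reach which (the transitive closure of the arrow relation) and group states that reach each other. The adjacency below is hypothetical, encoding the earlier five-state example (1→2→3→4→1 and 5→1), with states indexed from 0:

```python
# Find communicating classes by mutual reachability, using
# Warshall's transitive-closure algorithm on the arrow relation.
def classes(adj):
    n = len(adj)
    reach = [[i == j or bool(adj[i][j]) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    seen, out = set(), []
    for i in range(n):
        if i not in seen:
            cls = {j for j in range(n) if reach[i][j] and reach[j][i]}
            seen |= cls
            out.append(cls)
    return out

adj = [[0, 1, 0, 0, 0],   # state 1 -> 2
       [0, 0, 1, 0, 0],   # state 2 -> 3
       [0, 0, 0, 1, 0],   # state 3 -> 4
       [1, 0, 0, 0, 0],   # state 4 -> 1
       [1, 0, 0, 0, 0]]   # state 5 -> 1
assert classes(adj) == [{0, 1, 2, 3}, {4}]   # i.e. {1,2,3,4} and {5}
```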
Closed Set (class)
A set C of states in a Markov chain is a closed set if no state outside of C is reachable from any state in C. In the above example the set {1, 2} is closed.
Absorbing States:
State j is said to be an absorbing state if p_jj = 1; that is, once state j is reached, it is never left. Thus state j is absorbing if and only if the jth row of the transition matrix P has a 1 on the main diagonal and zeros everywhere else. (The main diagonal of an n-square matrix A = (a_ij) consists of the elements a_11, a_22, ..., a_nn.) An absorbing state is a closed set containing only one state.
Absorption Probabilities:
Consider a Markov chain {X_n} with finite state space S = {1, 2, ..., N} and transition probability matrix P. Let A = {1, 2, ..., m} be the set of absorbing states and B = {m+1, ..., N} be the set of non-absorbing states. Then (with the absorbing states listed first) P can be expressed in the block form

        [ I  O ]
    P = [ R  Q ]

where I is an m × m identity matrix, O is an m × (N − m) zero matrix, and

    R = [ p_{m+1,1}  ...  p_{m+1,m} ]      Q = [ p_{m+1,m+1}  ...  p_{m+1,N} ]
        [    ...     ...     ...    ],         [     ...      ...     ...    ]
        [ p_{N,1}    ...  p_{N,m}   ]          [ p_{N,m+1}    ...  p_{N,N}   ]

Note that the elements of R are the one-step transition probabilities from non-absorbing to absorbing states, and the elements of Q are the one-step transition probabilities among the non-absorbing states.
Example.
Suppose the following matrix is a transition matrix of a Markov chain.

           a1    a2    a3    a4    a5
    a1 [  1/4    0    1/4   1/4   1/4 ]
    a2 [   0     1     0     0     0  ]
    a3 [  1/2    0    1/4   1/4    0  ]
    a4 [   0     1     0     0     0  ]
    a5 [   0     0     0     0     1  ]

The states a2 and a5 are each absorbing, since each of the second and fifth rows has a 1 on the main diagonal.
Example. (Random walk with absorbing barriers)

A man is at an integral point on the x-axis between the origin 0 and, say, the point 4. He takes a unit step to the right with probability p or to the left with probability q = 1 − p, except that he remains at either endpoint whenever he reaches it. The transition matrix over the states a0, a1, a2, a3, a4 is

           a0   a1   a2   a3   a4
    a0 [   1    0    0    0    0 ]
    a1 [   q    0    p    0    0 ]
    a2 [   0    q    0    p    0 ]
    a3 [   0    0    q    0    p ]
    a4 [   0    0    0    0    1 ]

We call this process a random walk with absorbing barriers, since the states a0 and a4 are each absorbing. In this case p_{i0}^(n) denotes the probability that the man reaches the state a0 on or before the nth step. Similarly, p_{i4}^(n) denotes the probability that he reaches the state a4 on or before the nth step.
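With P in the block form [I O; R Q] above, the matrix B = (I − Q)⁻¹R collects the absorption probabilities: B[i, a] is the probability that the walk started at transient state i is eventually absorbed at state a. A sketch for this random walk, assuming a fair walk (p = q = 1/2):

```python
import numpy as np

# Transient states a1, a2, a3; absorbing states a0 and a4; p = q = 1/2.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],    # one-step probabilities into a0 and a4
              [0.0, 0.0],
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
B = N @ R                          # B[i, a] = P(absorbed at a | start i)

# Gambler's-ruin check: starting at a_i, P(reach a4 first) = i/4.
assert np.allclose(B[:, 1], [1/4, 2/4, 3/4])
assert np.allclose(B.sum(axis=1), 1.0)   # absorption is certain
```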
Example.
A man tosses a fair coin until 3 heads occur. Let X_n = i if, at the nth trial, the last tail occurred at the (n − i)th trial, i.e. X_n denotes the length of the string of heads ending at the nth trial. This is a Markov chain process with state space {a0, a1, a2, a3}, where a_i means the string of heads has length i. The transition matrix is

           a0    a1    a2    a3
    a0 [  1/2   1/2    0     0  ]
    a1 [  1/2    0    1/2    0  ]
    a2 [  1/2    0     0    1/2 ]
    a3 [   0     0     0     1  ]

Each row, except the last, corresponds to the fact that a string of heads is either broken if a tail occurs or is extended by one if a head occurs. The last row corresponds to the fact that the game ends if 3 heads are tossed in a row. Note that a3 is an absorbing state.
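The fundamental matrix N = (I − Q)⁻¹ of the transient part also yields expected absorption times, t = N·1. For the chain above this recovers the classical fact that 14 tosses are expected before three heads appear in a row:

```python
import numpy as np

# Transitions among the transient states a0, a1, a2 (a3 is absorbing).
Q = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.5, 0.0, 0.0]])
N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix (I - Q)^(-1)
t = N @ np.ones(3)                 # expected steps to absorption

# From a0 (no heads yet) the expected number of tosses is 14;
# from a1 it is 12, and from a2 it is 8.
assert np.allclose(t, [14.0, 12.0, 8.0])
```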
Recurrent and Transient States
A state is said to be recurrent if, any time that we leave that state, we will return to that state in the future with probability one. On the other hand, if the probability of returning is less than one, the state is called transient.
Definition.
State i is a transient state if there exists a state j that is reachable from i, but from which state i is not reachable. If state i is not transient, it is a recurrent state.


Example. Consider the following Markov chain.

Transient states: 2, 3, 4.    Recurrent states: 1, 5.

For example, we can go from 3 to 2, then from 2 to 1; then we get trapped in state 1 and will never come back to state 3 again.
Note that:
1. If two states are in the same class, either both of them are recurrent, or both of them
are transient.
2. A class is said to be recurrent if the states in that class are recurrent. If, on the other
hand, the states are transient, the class is called transient.
3. A Markov chain might consist of several transient classes as well as several
recurrent classes.
Definition (Alternative)
A set (class) C is called irreducible if i communicates with j whenever i, j ∈ C.
A set (class) C is closed if it is impossible to get out of it.
If C is a finite closed and irreducible set, then all states in C are recurrent.
If state A leads to state B but state B does not lead back to state A, then state A is transient.
Example
Consider the Markov chain shown in Figure below. Identify the transient and recurrent states, and the irreducible closed classes.

States 2 and 3 are transient. The remaining states form a closed and irreducible class, so they are recurrent.
Example.
Consider the chain on states 1, 2, 3, 4 with the transition matrix P as given.

Obviously, state 4 is recurrent, as it is an absorbing state (p_44 = 1). The only possibility to return to 3 is to do so in one step, so the return probability is p_33 < 1 and 3 is transient. The class {1, 2} is closed and irreducible, so states 1 and 2 are recurrent.


Periodic and Aperiodic States
A state i is periodic with period k > 1 if k is the smallest number such that all paths leading from state i back to state i have a length that is a multiple of k.
 - Absorbing states are aperiodic.
 - If we can return to a recurrent state at irregular times, it is aperiodic.
Example
Consider a Markov chain with the following state diagram.

All states are periodic with period k = 3.

For example, starting from state 1 we need 3, 6, or 9, ... steps to come back to state 1; every return time is a multiple of 3.
Example
Consider a Markov chain with the following state diagram.

States 1 and 5 are aperiodic because they are absorbing states.

The transient states 2, 3 and 4 are also aperiodic: we may never come back to these states again. Note that the class {2, 3, 4} is not closed.
Ergodic Markov Chain
If all states in a Markov chain are recurrent (not transient), aperiodic (not periodic) and
communicate with each other, the chain is said to be ergodic.


Example
Consider a Markov chain with the following state diagram.

This chain is not ergodic.
Example
Consider a Markov chain with the following state diagram.

This chain is ergodic because all states are recurrent and communicate with each other, and it is aperiodic: there is no period, since every state can be returned to at irregular times.


Solved Problems (2)

Problem 1: (Bernoulli Process)

Let X_1, X_2, ... be independent Bernoulli r.v.'s with P(X_n = 1) = p and P(X_n = 0) = q = 1 − p for all n. The collection of r.v.'s {X_n, n ≥ 1} is a random process, and it is called a Bernoulli process.
(a) Describe the Bernoulli process.
(b) Construct a typical sample sequence of the Bernoulli process.
(c) Find the transition matrix of this process, considering a sequence of coin flips where each flip has probability p of having the same outcome as the previous coin flip, regardless of all previous flips.
Problem 2: (Binomial Process)
Let X_1, X_2, ... be independent Bernoulli r.v.'s with P(X_n = 1) = p and P(X_n = 0) = q = 1 − p for all n. Let S_n be the number of successes in n Bernoulli trials; then the stochastic process {S_n, n ≥ 0} is called a Binomial process, and it is given by

    S_n = 0                         for n = 0,
    S_n = X_1 + X_2 + ... + X_n     for n ≥ 1.

(a) Describe the Binomial process.
(b) Find the transition matrix of this process.
Problem 3: (Simple Random Walk Process)
Let Z_1, Z_2, ... be independent identically distributed r.v.'s with P(Z_n = 1) = p and P(Z_n = −1) = q = 1 − p for all n. Let X_n = Σ_{i=1}^n Z_i and X_0 = 0. The collection of r.v.'s {X_n, n ≥ 0} is a random process, and it is called the simple random walk process in one dimension.
(a) Describe the simple random walk process.
(b) Construct a typical sample sequence of the process.
(c) Find the transition matrix of this process.
Problem 4:
A student's study habits are as follows. If he studies one night, he is 70% sure not to study the next night. On the other hand, if he does not study one night, he is 60% sure not to study the next night as well. Find
(a) The transition matrix of this process. (b) The transition matrix after 4 nights.
(c) Suppose that on the first night, the student tossed a fair die and studied if and only if a 2 or a 3 appeared. What is the probability that he didn't study in the fourth night?


Problem 5:
Consider a Markov chain with three possible states S = {1, 2, 3} that has the transition matrix P = [p_ij].
(a) Draw the state transition diagram for this chain.
(b) Find P(X_1 = j | X_0 = i) for given states i and j.
(c) Find P(X_2 = j | X_1 = i).
(d) If we know P(X_0 = i), find P(X_0 = i, X_1 = j).
(e) If we know P(X_0 = i), find P(X_0 = i, X_1 = j, X_2 = k).
(f) If we know the distribution of X_0, find P(X_1 = j, X_2 = k).
Problem 6:
A psychologist makes the following assumptions concerning the behavior of mice
subjected to a particular feeding schedule. For any particular trail 80% of the mice that
went right on the previous experiment will go right on this trial, and 60% of those mice
that went left on the previous experiment will go right on this trial. If 50% went right on
the first trial, what would he predict for:
(a) The second trial. (b) The third trial. (c) The thousandth trial.
Problem 7:
Consider a Markov chain which has the transition matrix

 0.7 0.2 0.1


 
P   0 0.6 0.4 
 0.5 0 0.5 
 
Determine

(a) ( )
(b) 2. ( )
(c) ( )
Problem 8
Determine whether each of the following is stochastic matrix or not and why given that * +?

(a) [ ] (b) * + (c) * +


Problem 9
Given the transition matrix
( )
[ ] and 0 1.
Find:
( ) ( ) ( )
(a) (b) (c)
Problem 10
Given the transition matrix

( )
[ ] and 0 1.

Find:
( ) ( ) ( ) ( )
(a) (b) (c) (d)
Problem 11
A salesman's area consists of three cities, A, B and C. He never sells in the same city on
successive days. If he sells in city A, then the next day he sells in city B. however, if he
sells in either B or C, then the next day he is twice as likely to sell in city A as in the other
city. Find
(a) The transition matrix of this process.
(b) In the long run, how often does he sell in each of the cities.
Problem 12
There are 2 white balls in urn A and 3 red balls in urn B. at each step of the process a ball
is selected from each urn and the two balls selected are interchanged. Let the state ai of the
process be the number i of red balls in urn A. find:
(a) The transition matrix of this process.
(b) What is the probability that there are 2 red balls in urn A after 3 steps.
(c) In the long run, what is the probability that there are 2 red balls in urn A.
Problem 13
Consider the transition matrix P of a Markov chain of S={0,1} .

0 1

Given that ( ) ( ) .
(a) Find the distribution of
(b) Find the distribution of , when .


Problem 14
Consider a Markov chain of the transition matrix

0 1 with * +.

Compute,
(a) ( ) (b) ( )
(c) ( )
Problem 15
Determine whether each of the given matrices is recurrence or not

[ ]

Problem 16
Consider the two Markov chains shown in Figure below. Identify the transient and
recurrent states, and the irreducible closed classes in each one.

(a) (b)

Problem 17
Consider the Markov chains shown in Figure below. Identify the transient, recurrent,
periodic and aperiodic states.

Problem 18
Determine whether the following matrix is ergodic or not and Why?

Problem 19. Determine whether each of the following matrix is regular or not and why?

(a) 0 1 (b) 0 1 (c) [ ]


Solutions (2)

Problem 1: (Solution)
(a) The Bernoulli process {X_n, n ≥ 1} is a discrete-parameter, discrete-state process. The state space is S = {0, 1}, and the parameter set is N = {1, 2, 3, ...}.
(b) A sample sequence of the Bernoulli process can be obtained by tossing a coin consecutively. If a head appears, we assign 1, and if a tail appears, we assign 0.

n            1  2  3  4  5  6  7  8  9  10 ...
Coin tossing H  T  T  H  H  H  T  H  H  T  ...
x_n          1  0  0  1  1  1  0  1  1  0  ...

(The sample sequence {x_n} obtained above can be plotted against n.)

(c) Transition matrix: each flip agrees with the previous one with probability p and changes with probability 1 − p, so

            0       1
    0 [     p     1 − p ]
    1 [   1 − p     p   ]
Problem 2: (Solution)
(a) The Binomial process {S_n, n ≥ 0} is a discrete-parameter, discrete-state process. The state space is S = {0, 1, 2, ...}, and the parameter set is T = {0, 1, 2, ...}.
(b) Transition matrix: the transition probabilities are

    P(S_{n+1} = j | S_n = i, S_{n-1} = i_{n-1}, ..., S_1 = i_1) = p_ij =
        p        if j = i + 1    (i = 0, 1, ...),
        1 − p    if j = i        (i = 0, 1, ...),
        0        otherwise.

In each trial the count either stays at i (the trial fails, with probability 1 − p) or moves to i + 1 (the trial succeeds, with probability p); moving from i to i + 2, i + 3, ... in a single step is impossible.
Problem 3: (Solution)
(a) The simple random walk process {X_n, n ≥ 0} is a discrete-parameter, discrete-state process. The state space is S = {..., −2, −1, 0, 1, 2, ...}, and the parameter set is T = {0, 1, 2, 3, ...}.
(b) A sample sequence of the simple random walk process can be obtained by tossing a coin every second and letting X_n increase by unity if a head appears, and decrease by unity if a tail appears. Thus, for instance,

n            0  1  2   3  4  5  6  7  8  9  10 ...
Coin tossing    H  T   T  H  H  H  T  H  H  T  ...
x_n          0  1  0  −1  0  1  2  1  2  3  2  ...

(The sample sequence {x_n} obtained above can be plotted against n.)

(c) The transition probabilities are

    p_ij = P(X_{n+1} = j | X_n = i) =
        p    if j = i + 1,
        q    if j = i − 1,
        0    otherwise.

(d) For the symmetric walk, with p = q = 1/2,
    p_ij = 1/2 if |i − j| = 1, and 0 otherwise.

Problem 4: (Solution)
(a) The transition matrix of this process, with state space S = {S "study", T "not study"}, is

          S     T
    S [ 0.3   0.7 ]
    T [ 0.4   0.6 ]

(b) The transition matrix after 4 nights is

    P^4 ≈ [ 0.36  0.64 ]
          [ 0.36  0.64 ]

(c) The probability that he didn't study in the second night:
π^(0) = [1/3, 2/3] is the initial probability distribution (he studied if and only if a 2 or a 3 appeared, with probability 2/6 = 1/3). Then

    π^(1) = π^(0) P = [1/3, 2/3] [ 0.3  0.7 ]  = [11/30, 19/30].
                                 [ 0.4  0.6 ]

The probability that he didn't study in the second night is 19/30 ≈ 0.63.
Problem 5: (Solution)

(b) By definition, P(X_1 = j | X_0 = i) = p_ij.

(c) By definition, P(X_2 = j | X_1 = i) = p_ij.

(d) P(X_0 = i, X_1 = j) = P(X_0 = i) P(X_1 = j | X_0 = i) = P(X_0 = i) p_ij.

(e) P(X_0 = i, X_1 = j, X_2 = k)
      = P(X_0 = i) P(X_1 = j | X_0 = i) P(X_2 = k | X_1 = j)    (by the Markov property)
      = P(X_0 = i) p_ij p_jk.

(f) By the law of total probability,
    P(X_1 = j, X_2 = k) = Σ_i P(X_0 = i) p_ij p_jk.

Problem 6: (Solution)
The state space is S = {R "Right", L "Left"}; the transition matrix of this process is

          R     L
    R [ 0.8   0.2 ]
    L [ 0.6   0.4 ]

The probability distribution for the first trial is π^(0) = [0.5, 0.5].
(a) To compute the probability distribution for the next step, i.e. the second trial, multiply π^(0) by the transition matrix P:

    [0.5, 0.5] [ 0.8  0.2 ]  = [0.7, 0.3].
               [ 0.6  0.4 ]

Thus, on the second trial he predicts that 70% of the mice will go right and 30% will go left.
(b) To compute the probability distribution for the third trial, multiply that of the second trial by the transition matrix P:

    [0.7, 0.3] [ 0.8  0.2 ]  = [0.74, 0.26].
               [ 0.6  0.4 ]

Thus, on the third trial he predicts that 74% of the mice will go right and 26% will go left.
(c) We assume that the probability distribution for the thousandth trial is essentially the stationary probability distribution of the Markov chain, which we compute from π̂ P = π̂:
    0.8x + 0.6(1 − x) = x  gives  x = 0.75,  so  π̂ = [0.75, 0.25].
Thus, on the thousandth trial he predicts that 75% of the mice will go right and 25% will go left.
Problem 7: (Solution)
The n-step probabilities p_ij^(n) = P(X_{m+n} = j | X_m = i) are read off from the powers of P:

    P^2 = [ 0.54  0.26  0.20 ]        P^3 = [ 0.478  0.264  0.258 ]
          [ 0.20  0.36  0.44 ]              [ 0.360  0.256  0.384 ]
          [ 0.60  0.10  0.30 ]              [ 0.570  0.180  0.250 ]

(a) A one-step probability is the corresponding entry of P itself.
(b) A three-step probability p_ij^(3) is the (i, j) entry of P^3.
(c) A two-step probability p_ij^(2) is the (i, j) entry of P^2.


 
Problem 8: (Solution)
(a) Stochastic, because every entry satisfies 0 ≤ p_ij ≤ 1 and every row sums to 1: Σ_j p_ij = 1.

(b) Not stochastic, because some row does not sum to 1: Σ_j p_ij ≠ 1.

(c) Not stochastic, because it contains an entry outside [0, 1].


Problem 9: (Solution)

( )
* + 0 1

( ) ( ) ( ) ( )
(a) 0 1* + 0 1 (b) (c)

Problem 10: (Solution)


( ) ( ) ( ) ( )
(a) (b) (c) 0 1 (d)

Problem 11: (Solution)

(a) The state space is S = {A, B, C} and

           A     B     C
    A [    0     1     0  ]
    B [   2/3    0    1/3  ]
    C [   2/3   1/3    0  ]

(b) Solving π̂ P = π̂ gives π̂ = [0.40, 0.45, 0.15].

Thus in the long run he sells 40% of the time in city A, 45% of the time in city B, and 15% of the time in city C.
Problem 12: (Solution)
(a) There are 3 states, described by the following diagrams:

    a0: A = {2W},      B = {3R}
    a1: A = {1W, 1R},  B = {1W, 2R}
    a2: A = {2R},      B = {2W, 1R}

The transition matrix is:

           a0    a1    a2
    a0 [   0     1     0  ]
    a1 [  1/6   1/2   1/3 ]
    a2 [   0    2/3   1/3 ]

For example, if the process is in state a0, then a white ball must be selected from urn A and a red ball from urn B, so the process must move to state a1. Accordingly, the first row of the transition matrix is [0, 1, 0]. To move from a1 to a0, a red ball must be selected from urn A and a white ball from urn B, with probability (1/2)(1/3) = 1/6.
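The remaining parts can be answered numerically from this matrix: part (b) by propagating the initial distribution three steps (the process starts in a0, with no red balls in urn A), and part (c) by solving π̂P = π̂:

```python
import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [1/6, 1/2, 1/3],
              [0.0, 2/3, 1/3]])

# (b) Distribution after 3 steps, starting in a0.
p = np.array([1.0, 0.0, 0.0])
for _ in range(3):
    p = p @ P
assert np.isclose(p[2], 5/18)           # P(2 red balls in A after 3 steps)

# (c) Stationary distribution: solve pi P = pi with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(pi, [0.1, 0.6, 0.3])  # long-run P(2 red in A) = 0.3
```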

Problem 13: (Solution)


( )
Since the initial distribution , -
( ) ( )
(a) the distribution of is given by: , -0 1


(b) Since P is regular, P^n → Π̂ as n → ∞, where each row of Π̂ equals the stationary distribution π̂. Hence the distribution of X_n converges to π̂, which is obtained by solving π̂ P = π̂.
Problem 14: (Solution)
( ) ( ) ( )
(a) ( )

(b) ( )

(c) ( )

0 1

Problem 15: (Solution)
For the given matrix P, p_ij^(n) > 0 for some n for every pair of states; hence P is irreducible, and since the chain is finite and closed, all its states are recurrent.
Problem 16: (Solution)
(a) State 3 is transient. The remaining states form a closed and irreducible class, so they are recurrent.
(b) States 1, 3 and 5 are transient. The remaining states split into two closed and irreducible classes, whose states are recurrent.
Problem 17: (Solution)
Transient: 1, 2 and 3.
Recurrent: 4, 5 and 6.
Aperiodic: the transient states 1, 2 and 3.
Periodic: 4, 5 and 6, with a common period k > 1.
Problem 18: (Solution)
The Markov chain is ergodic because states 4 and 5 are recurrent and aperiodic. States 1, 2 and 3 are transient.
Problem 19: (Solution)


(a) Yes: all entries are positive.

(b) Yes: P^2 has only positive entries.

(c) No: it is not irreducible (not all states are connected). Also, if you multiply it by itself over and over, every power will still contain zeros.

Poisson Processes

Lesson 9: Counting Process


Lesson 10: Poisson Process

Lesson 9: Counting Process

Definition: (Points of Occurrence)
Let t represent a time variable. Suppose an experiment begins at t = 0. Events of a particular kind occur randomly, the first at time T_1, the second at T_2, and so on. The random variable T_i denotes the time at which the ith event occurs, and the values t_i of T_i (i = 1, 2, ...) are called points of occurrence.
Definition: (An Interarrival process)
Let Z_n = T_n − T_{n−1} and T_0 = 0. Then Z_n denotes the time between the (n − 1)st and the nth events. The sequence of ordered random variables {Z_n, n ≥ 1} is called an interarrival process. Figure below shows a possible realization and the corresponding sample function of an interarrival process.

    Z1    Z2    Z3          Zn
0   T1    T2    T3   Tn-1   Tn   t

Definition: (Renewal process)
If all random variables Z_n are independent and identically distributed, then {Z_n, n ≥ 1} is called a renewal process or a recurrent process.
Definition: (Arrival process)
If T_n = Z_1 + Z_2 + ... + Z_n, where T_n denotes the time from the beginning until the occurrence of the nth event, then {T_n, n ≥ 1} is called an arrival process.
Counting process
In some problems, we count the occurrences of some type of event. In such scenarios, we
are dealing with a counting process. For example, you might have a random
process N(t) that shows the number of customers who arrive at a supermarket by
time t, starting from time 0. For such a process, we usually assume N(0) = 0, so as time
passes and customers arrive, N(t) takes positive integer values.
Definition
A random process {N(t), t ∈ [0, ∞)} is a counting process if N(t) represents the total
number of "events" that have occurred in the interval (0, t).
Examples: - Number of persons entering a store before time t
- Number of people who were born by time t
- Number of goals a soccer player scores by time t.


A counting process N(t) should satisfy:
1. N(0) = 0
2. N(t) is integer-valued, that is, N(t) ∈ {0, 1, 2, ...} for all t ∈ [0, ∞)
3. If s < t, then N(s) ≤ N(t)
4. For s < t, N(t) − N(s) equals the number of events that have occurred on the
interval (s, t]
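The four properties above can be sketched in code. A minimal illustration in plain Python (the arrival times below are hypothetical, and the function names are ours):

```python
import bisect

def make_counting_process(arrival_times):
    """Return a function N(t) counting arrivals T_i with T_i <= t."""
    times = sorted(arrival_times)
    def N(t):
        # bisect_right gives the number of sorted arrival times <= t
        return bisect.bisect_right(times, t)
    return N

# hypothetical arrival times T1, T2, T3, T4
N = make_counting_process([0.8, 1.5, 1.9, 3.2])
# N(0) = 0; N is integer-valued and nondecreasing;
# N(t) - N(s) counts the arrivals in (s, t]
```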
Since counting processes have been used to model arrivals (such as in the supermarket
example above), we usually refer to the occurrence of each event as an "arrival". For
example, if N(t) is the number of accidents in a city up to time t, we still refer to each
accident as an arrival. The figure below shows a possible realization and the corresponding
sample function of a counting process N(t).

By the above definition, the only sources of randomness are the arrival times Ti.
Definition: (Independent Increments)
Let {N(t), t ∈ [0, ∞)} be a continuous-time random process. We say that N(t) has
independent increments if, for all 0 ≤ t1 < t2 < ... < tn, the random variables
N(t2) − N(t1), N(t3) − N(t2), ..., N(tn) − N(tn-1)
are independent.
Note that for a counting process, N(t) − N(s) is the number of arrivals in the
interval (s, t]. Thus, a counting process has independent increments if the numbers of
arrivals in the non-overlapping (disjoint) intervals
(t1, t2], (t2, t3], ..., (tn-1, tn]
are independent.


A counting process has independent increments if the numbers of arrivals in
non-overlapping (disjoint) intervals are independent.
Having independent increments simplifies the analysis of a counting process. For example,
suppose that we would like to find the probability of having 2 arrivals in the interval (1,2],
and 3 arrivals in the interval (3,5]. Since the two intervals (1,2] and (3,5] are disjoint, we
can write
P(2 arrivals in (1,2] and 3 arrivals in (3,5])
= P(2 arrivals in (1,2]) · P(3 arrivals in (3,5])
Definition: (Stationary Increments)
Let {N(t), t ∈ [0, ∞)} be a continuous-time random process. We say that N(t) has
stationary increments if, for all s < t and all r > 0, the two random variables
N(t) − N(s) and N(t + r) − N(s + r) have the same distribution. In other words, the
distribution of the difference depends only on the length of the interval (s, t], and not on
the exact location of the interval on the real line.
A counting process has stationary increments if, for all s < t, N(t) − N(s) has
the same distribution as N(t − s).


Lesson 10: Poisson Process


One of the most important types of counting processes is the Poisson process (or Poisson
counting process). It is usually used in scenarios where we are counting the occurrences of
certain events that appear to happen at a certain rate, but completely at random (without a
certain structure). For example, suppose that from historical data, we know that
earthquakes occur in a certain area at a rate of 2 per month. Other than this information,
the timings of earthquakes seem to be completely random.
Further examples - The number of car accidents at a site or in an area.
- The location of users in a wireless network.
- The requests for individual documents on a web server.
- The outbreak of wars.
Poisson Random Variable
A discrete random variable X is said to be a Poisson random variable with parameter μ,
shown as X ~ Poisson(μ), if its range is {0, 1, 2, ...}, and its pmf is given by

P(X = k) = e^(−μ) μ^k / k!  for k = 0, 1, 2, ..., and 0 otherwise.

- if X ~ Poisson(μ), then E[X] = μ and Var(X) = μ
- if Xi ~ Poisson(μi) for i = 1, 2, ..., n and the Xi are independent, then
X1 + X2 + ... + Xn ~ Poisson(μ1 + μ2 + ... + μn)
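These facts can be checked numerically. A small sketch in plain Python (the function names are ours):

```python
import math

def poisson_pmf(k, mu):
    """P(X = k) for X ~ Poisson(mu)."""
    return math.exp(-mu) * mu**k / math.factorial(k)

def sum_pmf(mu1, mu2, k):
    """P(X1 + X2 = k) for independent X1 ~ Poisson(mu1), X2 ~ Poisson(mu2),
    computed by discrete convolution."""
    return sum(poisson_pmf(j, mu1) * poisson_pmf(k - j, mu2)
               for j in range(k + 1))
```

By the additivity property above, sum_pmf(mu1, mu2, k) should agree with poisson_pmf(k, mu1 + mu2).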
Definition (Poisson Process)
The counting process {N(t), t ∈ [0, ∞)} is said to be a Poisson process having rate
(intensity) λ (λ > 0) if:
1. N(0) = 0
2. N(t) has independent increments.
3. The number of arrivals in any interval of length τ > 0 has a Poisson(λτ)
distribution with mean λτ.

It follows from condition 3 that a Poisson process has stationary increments and that
E[N(t)] = λt and Var(N(t)) = λt


We conclude that in a Poisson process, the distribution of the number of arrivals in any
interval depends only on the length of the interval and not on the exact location of the
interval on the real line.
Result. Let p_n(t) be the probability that exactly n events occur in an interval of length t,
namely,
p_n(t) = P(N(t) = n). We have, for each n = 0, 1, 2, ...
p_n(t) = e^(−λt) (λt)^n / n!
Example.

The number of customers arriving at a grocery store can be modelled by a Poisson process
with intensity λ = 10 customers per hour.
1. Find the probability that there are 2 customers between 10:00 and 10:20.
2. Find the probability that there are 3 customers between 10:00 and 10:20
and 7 customers between 10:20 and 11.
Solution

1. Here λ = 10 and the interval between 10:00 and 10:20 has length 1/3 hours. Thus,
if X is the number of arrivals in that interval, we can write X ~ Poisson(10/3).
Therefore,

P(X = 2) = e^(−10/3) (10/3)^2 / 2! ≈ 0.2

2. Here we have the non-overlapping intervals I1 = (10:00, 10:20] and I2 = (10:20, 11:00].

Thus, we can write
P(3 arrivals in I1 and 7 arrivals in I2)
= P(3 arrivals in I1) · P(7 arrivals in I2)
Since the lengths of the two intervals are 1/3 and 2/3 hours respectively, we obtain
λτ1 = 10/3 and λτ2 = 20/3. Thus, we have

P(3 arrivals in I1) · P(7 arrivals in I2)
= (e^(−10/3) (10/3)^3 / 3!) · (e^(−20/3) (20/3)^7 / 7!)
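The two answers above can be evaluated numerically (a sketch in plain Python):

```python
import math

def poisson_pmf(k, mu):
    return math.exp(-mu) * mu**k / math.factorial(k)

# Part 1: N1 ~ Poisson(10/3) over the 20-minute (1/3-hour) interval
p1 = poisson_pmf(2, 10/3)

# Part 2: disjoint intervals of lengths 1/3 and 2/3 hour; by independent
# increments, the joint probability factorizes
p2 = poisson_pmf(3, 10/3) * poisson_pmf(7, 20/3)
```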


Example.
Suppose the process {N(t), t ∈ [0, ∞)} is a Poisson process having rate λ. For
t1 < t2 < t3 and n1 ≤ n2 ≤ n3, find P{N(t1) = n1, N(t2) = n2, N(t3) = n3}.
Solution:

We have P{N(t1) = n1, N(t2) − N(t1) = n2 − n1, N(t3) − N(t2) = n3 − n2}.
From the independent-increments property we notice that the r.v.'s
N(t1), N(t2) − N(t1), N(t3) − N(t2) are independent; according to the stationarity
property, these r.v.'s follow Poisson distributions with parameters
λt1, λ(t2 − t1), λ(t3 − t2) respectively. Therefore,
P{N(t1) = n1, N(t2) = n2, N(t3) = n3}
= p_{n1}(t1) · p_{n2−n1}(t2 − t1) · p_{n3−n2}(t3 − t2)

Second definition for the Poisson process:

Let N(t) be a Poisson process with rate λ. Consider a very short interval of length Δ. Then,
the number of arrivals in this interval has the same distribution as N(Δ). In particular, we
can write
P(N(Δ) = 0) = e^(−λΔ)
= 1 − λΔ + (λΔ)^2/2! − ...  (Taylor series)

Note that if Δ is small, the terms that include second or higher powers of Δ are negligible
compared to Δ. We write this as
P(N(Δ) = 0) = 1 − λΔ + o(Δ)
P(N(Δ) = 0) is the probability that no event occurs in the interval of length Δ.
Here o(Δ) is a function of Δ which goes to zero faster than Δ does; that is,
lim_{Δ→0} o(Δ)/Δ = 0
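The claim that the remainder is o(Δ) is easy to see numerically: as Δ shrinks, |e^(−λΔ) − (1 − λΔ)| / Δ goes to 0. A sketch in plain Python (λ = 2 is an arbitrary choice for illustration):

```python
import math

lam = 2.0
ratios = []
for delta in [0.1, 0.01, 0.001]:
    exact = math.exp(-lam * delta)    # P(N(delta) = 0)
    approx = 1 - lam * delta          # first-order approximation
    # the remainder divided by delta should tend to 0
    ratios.append(abs(exact - approx) / delta)
```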


The letter omicron (o) was originally used in mathematics as a symbol for Big O notation,
representing the asymptotic rate of growth of a function.
Now, let us look at the probability of having one arrival in an interval of length Δ.
P(N(Δ) = 1) = e^(−λΔ) λΔ
= λΔ (1 − λΔ + (λΔ)^2/2! − ...)  (Taylor series)
= λΔ + o(Δ)

We conclude that
P(N(Δ) = 1) = λΔ + o(Δ)
Similarly,
P(N(Δ) ≥ 2) = o(Δ)

Definition
The counting process {N(t), t ∈ [0, ∞)} is said to be a Poisson process having rate
(intensity) λ (λ > 0) if:
1. N(0) = 0
2. N(t) has independent and stationary increments.
3. We have
P(N(Δ) = 0) = 1 − λΔ + o(Δ)
P(N(Δ) = 1) = λΔ + o(Δ)
P(N(Δ) ≥ 2) = o(Δ)
Distribution of Interarrival Times
Exponential Distribution
It is often used to model the time elapsed between events. A continuous random
variable X is said to have an exponential distribution with parameter λ > 0, shown as
X ~ Exponential(λ), if its probability density function is of the form

f(x) = λ e^(−λx) for x > 0, and 0 otherwise.

Its CDF is given as
F(x) = P(X ≤ x) = 1 − e^(−λx) for x > 0, which implies that P(X > x) = e^(−λx)
- if X ~ Exponential(λ), then E[X] = 1/λ and Var(X) = 1/λ^2
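A quick numerical check of these formulas (plain Python; the inverse-CDF sampler is a standard construction, not something from the notes):

```python
import math
import random

def exp_cdf(x, lam):
    """F(x) = 1 - e^(-lam*x) for x > 0."""
    return 1 - math.exp(-lam * x) if x > 0 else 0.0

def exp_sample(lam, rng):
    # inverse-CDF method: -ln(U)/lam ~ Exponential(lam) for U ~ Uniform(0,1)
    return -math.log(rng.random()) / lam

rng = random.Random(0)
lam = 2.0
samples = [exp_sample(lam, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)   # should be close to 1/lam = 0.5
```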


Connection between a Poisson process and the exponential distribution

There is actually a strong connection between a Poisson process and the exponential
distribution.
Let N(t) be a Poisson process with rate λ. Let Z1 be the time of the first arrival. Then,
P(Z1 > t) = P(no arrival in (0, t]) = e^(−λt)

We conclude
F(t) = P(Z1 ≤ t) = 1 − e^(−λt)
Therefore, Z1 ~ Exponential(λ). Let Z2 be the time elapsed between the first and the
second arrival.
Z1 Z2 Z3 Zn
0 T1 T2 T3 Tn-1 Tn t
Figure: The random variables Z1, Z2, ... are called the interarrival times of the counting
process N(t).

Let s, t ≥ 0. Note that the numbers of arrivals in the two intervals (0, s] and (s, s + t] are
independent. We can write
P(Z2 > t | Z1 = s) = P(no arrival in (s, s + t] | Z1 = s)
= P(no arrival in (s, s + t]) = e^(−λt)
We conclude that Z2 ~ Exponential(λ), and that Z1 and Z2 are independent. The random
variables Z1, Z2, ... are called the interarrival times of the counting process N(t).
Similarly, we can argue that all the Zi are independent and Zi ~ Exponential(λ) for
i = 1, 2, 3, ...

Interarrival times for a Poisson process:

Let N(t) be a Poisson process with rate λ. Then the interarrival times Z1, Z2, ... are
independent and
Zi ~ Exponential(λ)
Remember that if X is exponential with parameter λ, then X is a memoryless random
variable, that is
P(X > s + t | X > s) = P(X > t)
Thinking of the Poisson process, the memoryless property of the interarrival times is
consistent with the independent-increments property of the Poisson process. In some
sense, both are implying that the numbers of arrivals in non-overlapping intervals are
independent.
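This connection gives a standard way to simulate a Poisson process: generate Exponential(λ) interarrival times and count how many arrivals land in (0, t]. A sketch (seeded for reproducibility; the parameter values are arbitrary):

```python
import math
import random

def poisson_count(lam, t, rng):
    """Simulate N(t) by accumulating Exponential(lam) interarrival times."""
    count, time = 0, 0.0
    while True:
        time += -math.log(rng.random()) / lam   # next interarrival Z ~ Exp(lam)
        if time > t:
            return count
        count += 1

rng = random.Random(1)
lam, t = 3.0, 2.0
counts = [poisson_count(lam, t, rng) for _ in range(20_000)]
avg = sum(counts) / len(counts)   # should be close to E[N(t)] = lam*t = 6
```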


Example
Let N(t) be a Poisson process with intensity λ = 2, and let Z1, Z2, ... be the corresponding
interarrival times.
1. Find the probability that the first arrival occurs after t = 0.5, i.e., P(Z1 > 0.5).
2. Given that we have had no arrivals before t = 1, find P(Z1 > 3).
3. Given that the third arrival occurred at time t = 2, find the probability that the
fourth arrival occurs after t = 4.
Solution
1. Since Z1 ~ Exponential(2), we can write
P(Z1 > 0.5) = e^(−2 × 0.5) = e^(−1) ≈ 0.37
Another way to solve this is to note that
P(Z1 > 0.5) = P(no arrival in (0, 0.5]) = e^(−2 × 0.5) = e^(−1)
2. We can write
P(Z1 > 3 | Z1 > 1) = P(Z1 > 2)  (memoryless property)
= e^(−2 × 2) = e^(−4) ≈ 0.0183

Another way to solve this is to note that the number of arrivals in (1,3] is independent of the
arrivals before t = 1. Thus
P(Z1 > 3 | Z1 > 1) = P(no arrival in (1,3] | no arrival in (0,1])
= P(no arrival in (1,3])
= e^(−2 × 2) = e^(−4)
3. The time between the third and the fourth arrival is Z4 ~ Exponential(2). We can write
P(Z4 > 2 | Z1 + Z2 + Z3 = 2) = P(Z4 > 2)  (independence of the Zi's)
= e^(−2 × 2) = e^(−4) ≈ 0.0183
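Using λ = 2 as in the example above (the numeric values here follow from that choice), the three answers reduce to simple exponentials:

```python
import math

lam = 2.0
p1 = math.exp(-lam * 0.5)   # P(Z1 > 0.5) = e^(-1)
p2 = math.exp(-lam * 2.0)   # P(Z1 > 3 | Z1 > 1) = P(Z1 > 2) = e^(-4)
p3 = math.exp(-lam * 2.0)   # P(Z4 > 2) = e^(-4)
```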


Distribution of Arrival Times

Gamma Distribution
A continuous random variable X is said to have a gamma distribution with parameters
α > 0 and λ > 0, shown as X ~ Gamma(α, λ), if its probability density function is of the form

f(x) = λ^α x^(α−1) e^(−λx) / Γ(α) for x > 0, and 0 otherwise.

- if X ~ Gamma(α, λ), then E[X] = α/λ and Var(X) = α/λ^2
- if we let α = 1, we get Gamma(1, λ) = Exponential(λ)
- Γ(n) = (n − 1)! for a positive integer n
We know that Tn = Z1 + Z2 + ... + Zn, where Tn denotes the time from the beginning
until the occurrence of the nth event; {Tn, n ≥ 1} is called an arrival process. So
Tn is the sum of n independent Exponential(λ) random variables.
Theorem. If X = X1 + X2 + ... + Xn, where the Xi's are independent Exponential(λ) random
variables, then X ~ Gamma(n, λ).
The gamma distribution with an integer shape parameter is also called the Erlang
distribution, i.e., we can write
Gamma(n, λ) = Erlang(n, λ)
The PDF of Tn is given by

f(t) = λ^n t^(n−1) e^(−λt) / (n − 1)! for t > 0, and 0 otherwise.

Arrival times for a Poisson process:

Let N(t) be a Poisson process with rate λ. Then the arrival time Tn has a
Gamma(n, λ) distribution. In particular, we have

E[Tn] = n/λ and Var(Tn) = n/λ^2
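The identity P(Tn ≤ t) = P(N(t) ≥ n) links the Erlang CDF to Poisson tail probabilities, which gives a numerical cross-check of the pdf above (plain Python, trapezoid integration; the parameter values are arbitrary):

```python
import math

def erlang_pdf(t, n, lam):
    """f(t) = lam^n t^(n-1) e^(-lam*t) / (n-1)! for t > 0."""
    return lam**n * t**(n - 1) * math.exp(-lam * t) / math.factorial(n - 1)

def erlang_cdf(t, n, lam, steps=100_000):
    # numerical integration of the pdf by the trapezoid rule
    h = t / steps
    s = 0.5 * (erlang_pdf(0.0, n, lam) + erlang_pdf(t, n, lam))
    s += sum(erlang_pdf(i * h, n, lam) for i in range(1, steps))
    return s * h

lam, n, t = 2.0, 3, 1.5
lhs = erlang_cdf(t, n, lam)                        # P(T_n <= t)
rhs = 1 - sum(math.exp(-lam * t) * (lam * t)**k / math.factorial(k)
              for k in range(n))                   # P(N(t) >= n)
```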


Solved Problems (3)

Problem 1
Suppose we know that a receptionist receives an average of 15 phone calls per hour.
a) What is the probability that he will receive at least two calls between 8 and 8:12 am?
b) If the receptionist is absent for 10 minutes, what is the probability that no call has
been lost?
Problem 2
Consider the failures of a link in a communication network. Failures occur according to a
Poisson process with rate 2.4 per day. Find:
(i) the probability that the time between failures is greater than … days;
(ii) the probability that the time between failures is less than … days;
(iii) the probability of … failures in … days;
(iv) the probability of 0 failures in the next day.
Problem 3
Damages in a connection wire under the ground occur according to a Poisson process at a
rate of λ per mile.
a) What is the probability of no damage in the first 2 miles?
b) Given that there is no damage in the first 2 miles, what is the probability of no
damage between the 2nd and 3rd miles?
Problem 4
Suppose the process {N(t), t ≥ 0} is a Poisson process having rate λ, and let t1 < t2. Find:
1. P{N(t1) = n1, N(t2) = n2}.
2. P{N(t1) = n1 | N(t2) = n2}.


Solutions (3)

Problem 1: (Solution)
(a) Suppose X is the random variable counting the number of calls received between
8 and 8:12 am; hence X follows a Poisson distribution with mean μ = 15 × (12/60) = 3. Then,
P{X ≥ 2} = 1 − (P{X = 0} + P{X = 1})
= 1 − e^(−3) − 3e^(−3) = 1 − 4e^(−3) ≈ 0.801
(b) Suppose Y is the random variable counting the number of calls received within 10
minutes; hence Y follows a Poisson distribution with mean μ = 15 × (10/60) = 2.5. Then the
probability that no calls have been received within 10 minutes is

P{Y = 0} = e^(−2.5) ≈ 0.082
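A quick numerical check of both answers (plain Python):

```python
import math

# (a) 15 calls/hour over 12 minutes: mean = 15 * 12/60 = 3
mu_a = 15 * 12 / 60
p_at_least_two = 1 - math.exp(-mu_a) - mu_a * math.exp(-mu_a)

# (b) 15 calls/hour over 10 minutes: mean = 15 * 10/60 = 2.5
mu_b = 15 * 10 / 60
p_no_calls = math.exp(-mu_b)
```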

Problem 2: (Solution)
Let Z denote the time between failures (in days), so Z ~ Exponential(2.4).
(i) P(Z > t) = e^(−2.4t)
(ii) P(Z < t) = 1 − e^(−2.4t)
(iii) P(k failures in t days) = e^(−2.4t) (2.4t)^k / k!

(iv) P(0 failures in the next day) = e^(−2.4) ≈ 0.091
Problem 3: (Solution)
Suppose N(t) is the number of damages occurring up to mile t. Then,
a) The random variable N(2) follows a Poisson distribution with rate parameter
2λ; hence
P{N(2) = 0} = e^(−2λ)
b) Since the two random variables N(2) − N(0) and N(3) − N(2) are independent,
the conditional probability and the unconditional probability are
equivalent; then
P{N(3) − N(2) = 0 | N(2) = 0} = P{N(3) − N(2) = 0} = e^(−λ)
Problem 4: (Solution)
1. For t1 < t2, the event {N(t1) = n1, N(t2) = n2} is

equivalent to

{N(t1) = n1, N(t2) − N(t1) = n2 − n1}, where N(t1) and N(t2) − N(t1) are independent

Poisson random variables with parameters λt1 and λ(t2 − t1). Hence
P{N(t1) = n1, N(t2) = n2} = p_{n1}(t1) · p_{n2−n1}(t2 − t1)
= (e^(−λt1) (λt1)^{n1} / n1!) · (e^(−λ(t2−t1)) (λ(t2−t1))^{n2−n1} / (n2−n1)!)

2. P{N(t1) = n1 | N(t2) = n2}
= P{N(t1) = n1, N(t2) = n2} / P{N(t2) = n2}
= p_{n1}(t1) · p_{n2−n1}(t2 − t1) / p_{n2}(t2)


Branching Processes

Lesson 11: Simple Branching Process



Lesson 11: Branching Processes

Consider some sort of population consisting of reproducing individuals.

Examples:
Living things (animals, plants, bacteria, royal families); diseases; computer viruses;
rumours, gossip, lies (one lie always leads to another!)
Start conditions: start at time n = 0, with a single individual.
Each individual: lives for 1 unit of time. At time n = 1, it produces a family of
offspring, and immediately dies.
How many offspring? Could be 0, 1, 2, . . . . This is the family size, Y (Y stands for number
of Young).
Each offspring: lives for 1 unit of time. At time n = 2, it produces its own family
of offspring, and immediately dies. And so on. . .
Assumptions
1. All individuals reproduce independently of each other.
2. The family sizes of different individuals are independent, identically distributed random
variables. Denote the family size by Y (number of Young).
Family size distribution: P(Y = k) = p_k for k = 0, 1, 2, . . .

Definition:
A branching process is defined as follows.
- Single individual at time n = 0.
- Every individual lives exactly one unit of time, then produces Y offspring and dies.
- The number of offspring Y takes values 0, 1, 2, . . . , and the probability of
producing k offspring is P(Y = k) = p_k.
- All individuals reproduce independently. Individuals 1, 2, . . . , n have family sizes
Y_1, Y_2, . . . , Y_n, where each Y_i has the same distribution as Y.
- Let Z_n be the number of individuals born at time n, for n = 0, 1, 2, . . . Interpret
Z_n as the size of generation n.
- Then the branching process is {Z_n : n = 0, 1, 2, . . .} = {Z_0, Z_1, Z_2, . . .}.
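The definition translates directly into a simulation. A minimal sketch (the offspring distribution below is an arbitrary illustration, not from the notes):

```python
import random

def next_generation(z, p, rng):
    """Sum of z i.i.d. family sizes Y, where P(Y = k) = p[k]."""
    return sum(rng.choices(range(len(p)), weights=p)[0] for _ in range(z))

def branching_process(p, n_generations, rng):
    """Return [Z0, Z1, ..., Zn], starting from a single individual."""
    sizes = [1]                       # Z0 = 1
    for _ in range(n_generations):
        sizes.append(next_generation(sizes[-1], p, rng))
    return sizes

rng = random.Random(42)
# hypothetical family-size distribution: p0 = 0.25, p1 = 0.25, p2 = 0.5
sizes = branching_process([0.25, 0.25, 0.5], 10, rng)
```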


Definition:
The state of the branching process at time n is Z_n, where each Z_n can take values
0, 1, 2, . . . Note that Z_0 = 1 always. Z_n represents the size of the population at time n.
Branching Process

Analysing the Branching Process

Z_n as a randomly stopped sum:
Consider the following.
The population size at time n − 1 is given by Z_{n-1}.
Label the individuals at time n − 1 as 1, 2, . . . , Z_{n-1}.
Each individual starts a new branching process. Let Y_1, Y_2, . . . , Y_{Z_{n-1}} be the
random family sizes of the individuals 1, 2, . . . , Z_{n-1}.
The number of individuals at time n, Z_n, is equal to the total number of offspring of the
individuals 1, 2, . . . , Z_{n-1}. That is,
Z_n = Y_1 + Y_2 + . . . + Y_{Z_{n-1}}

Thus Z_n is a randomly stopped sum: a sum of Y_1, Y_2, . . . , randomly stopped by the random

variable Z_{n-1}.
Note:
1. Each Y_i ~ Y: that is, each individual has the same family size
distribution.
2. Y_1, Y_2, . . . are independent.
Probability Generating Function of Z_n
Theorem. Let G(s) = E(s^Y) = Σ_k p_k s^k be the PGF of the family size distribution Y.
Let Z_0 = 1 (start from a single individual at time 0), and let Z_n be the population size at
time n (n = 1, 2, . . .). Let G_n(s) be the PGF of the random variable Z_n. Then

G_n(s) = G(G(. . . G(s) . . .))  (n-fold composition of G)


Proof. Let G(s) = E(s^Y) be the probability generating function of Y.

(Recall that Y is the number of Young of an individual: the family size.)
Now Z_n is a randomly stopped sum: it is the sum of Y_1, Y_2, . . . , stopped by the random
variable Z_{n-1}. So we can use the theorem below to express the PGF of Z_n directly in terms
of the PGFs of Y and Z_{n-1}.
Theorem: if T = Y_1 + Y_2 + . . . + Y_N, where the Y_i are i.i.d. and N is itself random, then
the PGF of T is given by:
G_T(s) = G_N(G_Y(s))  (*)
where G_N is the PGF of the random variable N.
For ease of notation, we can write:
G_Y(s) = G(s), G_{Z_n}(s) = G_n(s), and so on.
Note that Z_n = Y_1 + . . . + Y_{Z_{n-1}} (the number of individuals born at time n), so from (*),
G_n(s) = G_{n-1}(G(s))  (**)
Note:
1. G_n(s) is the PGF of the population size at time n.
2. G_{n-1}(s) is the PGF of the population size at time n − 1.
3. G_1(s) = G(s), the PGF of the family size Y (since Z_1 = Y_1).
We are trying to find the PGF of Z_n, the population size at time n.
So far, we have:
G_n(s) = G_{n-1}(G(s))  (**)
But by the same argument,
G_{n-1}(r) = G_{n-2}(G(r))
(use r instead of s to avoid confusion in the next line.)
Substituting in (**),
G_n(s) = G_{n-2}(G(r))  where r = G(s)
= G_{n-2}(G(G(s)))  replacing r = G(s)
By the same reasoning, we will obtain:

G_n(s) = G_{n-3}(G(G(G(s))))

and so on, until we finally get

G_n(s) = G_1(G(G(. . . G(s) . . .)))  (n − 1 G's inside G_1)

= G(G(. . . G(s) . . .))  (n-fold composition of G)
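The composition is easy to iterate numerically. A sketch assuming, purely for illustration, a Poisson(1.5) family size, whose PGF is G(s) = e^{1.5(s−1)} (this particular family size is our choice, not from the notes):

```python
import math

LAM = 1.5   # assumed mean family size for this illustration

def G(s):
    """PGF of a Poisson(LAM) family size: G(s) = exp(LAM*(s - 1))."""
    return math.exp(LAM * (s - 1))

def G_n(s, n):
    """PGF of Z_n: the n-fold composition G(G(...G(s)...))."""
    for _ in range(n):
        s = G(s)
    return s

# G_n(0) = P(Z_n = 0): the probability the population is extinct by time n
extinct = [G_n(0.0, n) for n in range(1, 6)]
```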

Mean of Z_n
Theorem. Let {Z_n : n ≥ 0} be a branching process with Z_0 = 1 (start with a single
individual). Let Y denote the family size distribution, and suppose that E(Y) = μ. Then
E(Z_n) = μ^n
Proof. We know that Z_n is a randomly stopped sum, Z_n = Y_1 + . . . + Y_{Z_{n-1}}, so

E(Z_n) = E(Σ Y_i) = E(Z_{n-1}) E(Y) = μ E(Z_{n-1})
= μ^2 E(Z_{n-2})
= . . .

= μ^n E(Z_0) = μ^n

Example. Suppose the family size Y has mean μ = E(Y) = 2.

Then the expected population size by generation n is:

E(Z_n) = μ^n = 2^n

Example. Suppose the family size Y has mean μ = E(Y) = 0.5. Then

E(Z_n) = μ^n = (0.5)^n → 0, so the population is expected to die out.
Variance of Z_n
Theorem. Let {Z_n : n ≥ 0} be a branching process with Z_0 = 1 (start with a single
individual). Let Y denote the family size distribution, and suppose that E(Y) = μ and
Var(Y) = σ^2. Then

Var(Z_n) = σ^2 μ^(n−1) (μ^n − 1) / (μ − 1)  if μ ≠ 1,
Var(Z_n) = n σ^2  if μ = 1.
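The closed form can be cross-checked against the one-step recursion Var(Z_n) = σ^2 μ^(n−1) + μ^2 Var(Z_{n-1}), which follows from the randomly stopped sum (a plain-Python sketch with our own function names):

```python
def var_closed(n, mu, sigma2):
    """Var(Z_n) from the theorem above."""
    if mu == 1:
        return n * sigma2
    return sigma2 * mu**(n - 1) * (mu**n - 1) / (mu - 1)

def var_recursive(n, mu, sigma2):
    """Iterate Var(Z_k) = sigma2*mu^(k-1) + mu^2*Var(Z_{k-1}); Var(Z_0) = 0."""
    v = 0.0
    for k in range(1, n + 1):
        v = sigma2 * mu**(k - 1) + mu**2 * v
    return v
```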


Example. Suppose the family size Y has mean μ = E(Y) = 2 and variance
σ^2 = Var(Y) = 1.
Since μ ≠ 1:

Var(Z_n) = σ^2 μ^(n−1) (μ^n − 1) / (μ − 1) = 2^(n−1) (2^n − 1)

Example. Suppose the family size Y has mean μ = E(Y) = 1 and variance
σ^2 = Var(Y) = 1.

Since μ = 1:
Var(Z_n) = n σ^2 = n


