(MTL106) Review Notes - Stochastic Processes (IITD)

MTL106 Stochastic Processes Review Notes

Viraj Agashe
December 2021

Contents

1 Stochastic Processes
1.1 Important Terminologies
1.2 Types of Stochastic Process
1.3 Simple Random Walk
1.4 Markov Process

2 Stationary Process
2.1 Terminologies
2.2 Strict Sense Stationary Process
2.3 Wide Sense Stationary Process

3 Discrete Time Markov Chain
3.1 DTMC
3.2 Transition Probability Function
3.3 One Step Transition Probability
3.4 State Transition Diagram
3.5 Chapman-Kolmogorov Equations
3.6 Important Terminology
3.6.1 Communicating States
3.6.2 Class
3.6.3 Periodicity
3.6.4 Closed Set of States
3.6.5 Irreducible Markov Chain
3.6.6 First Visit
3.6.7 Mean Recurrence Time
3.7 Classification of States
3.8 Stationary and Limiting Distributions

4 Continuous Time Markov Chains
4.1 CTMC
4.2 Stationary Transition Probability
4.3 Infinitesimal Generator Matrix
4.4 Kolmogorov Equations
4.5 Limiting and Stationary Distributions

5 Poisson Process

6 Queueing Models
6.1 Kendall Notation
6.2 Little's Law
6.3 M/M/1 Queueing Model
6.4 M/M/c Queueing Model
6.5 M/M/1/N

1 Stochastic Processes
1.1 Important Terminologies
1. Stochastic Process: A collection of random variables {X(t), t ≥ 0} on a probability space is
called a stochastic process. It is a function of two arguments, X(ω, t).
2. Parameter Space: The set T to which t belongs.

3. State Space: The set of all possible values of X(t).


4. Sample Path: The plot of the state Xi against the parameter i.

1.2 Types of Stochastic Process


On the basis of the nature of the parameter space and the state space, we can classify stochastic processes:
1. Discrete Time, Discrete State: e.g. number of packets waiting in a buffer after n seconds
2. Continuous Time, Discrete State: e.g. number of customers in a store at any time
3. Discrete Time, Continuous State: e.g. water level in a dam on the n-th day
4. Continuous Time, Continuous State: e.g. temperature of a city at time t

1.3 Simple Random Walk


Let {Xi} be a sequence of independent, identically distributed random variables on a probability space with

P(Xi = 1) = p,   P(Xi = −1) = 1 − p

Then the stochastic process {Sn} where Sn = X0 + X1 + · · · + Xn is called a simple random walk. When p = 1/2 the random walk is called a symmetric random walk. E.g. a coin tossing game between A and B: A gets $1 if the coin lands heads and gives $1 if it lands tails.
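A minimal simulation sketch of this construction (the horizon n and probability p below are illustrative choices, and the walk is started at 0 for simplicity):

```python
import random

def simple_random_walk(n, p=0.5, seed=0):
    """Return the path S_0, ..., S_n of a simple random walk started at 0,
    where each step is +1 with probability p and -1 otherwise."""
    rng = random.Random(seed)
    s, path = 0, [0]
    for _ in range(n):
        s += 1 if rng.random() < p else -1  # each increment is +1 w.p. p, else -1
        path.append(s)
    return path

path = simple_random_walk(10)  # symmetric random walk (p = 1/2)
```

Every increment of the path is ±1, so after n steps the walk lies within [−n, n].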

1.4 Markov Process


A stochastic process satisfying the memoryless property,

P(Xk = Sk / Xk−1 = Sk−1, . . . , X1 = S1) = P(Xk = Sk / Xk−1 = Sk−1)

i.e. the distribution of Xk given the entire past depends only on Xk−1, is called a Markov Process.

2 Stationary Process
2.1 Terminologies
1. Mean Function: The expectation function of a stochastic process.

m(t) = E (X(t))

2. Second Order Process: A stochastic process with finite second moment, i.e. E(X²(t)) < ∞.

3. Covariance Function: Defined as

c(s, t) = cov(X(s), X(t))

It satisfies:
• c(s, t) = c(t, s)
• By the Cauchy-Schwarz inequality, |c(s, t)| ≤ √(c(s, s) c(t, t))

4. Autocorrelation Function: Defined as

R(s, t) = c(s, t) / (√var(X(s)) √var(X(t)))

2.2 Strict Sense Stationary Process


If for arbitrary t1, t2, . . . , tn ∈ T the joint distributions of (X(t1), X(t2), . . . , X(tn)) and (X(t1 + h), X(t2 + h), . . . , X(tn + h)) are the same for all h > 0, then {X(t)} is a strict sense stationary process of order n. If this is satisfied ∀ n ∈ N then it is a strict sense stationary process.

2.3 Wide Sense Stationary Process


A wide sense stationary process is a stochastic process which satisfies:
1. m(t) = E(X(t)) is independent of t.
2. The process is second order.
3. c(s, t) is a function of |t − s| only.

It is also called a weakly stationary or covariance stationary process. Note that a second-order strict sense stationary process is also wide sense stationary, but the converse is not true.

3 Discrete Time Markov Chain


3.1 DTMC
Consider a discrete time, discrete state stochastic process {Xn}. If

P(Xn+1 = j / X0 = i0, . . . , Xn = i) = P(Xn+1 = j / Xn = i)

for all states i0, i1, . . . , i, j and all n ≥ 0, then the process is a DTMC.

3.2 Transition Probability Function
The transition probability function of Markov Chain is defined as,

Pjk (m, n) = P (Xn = k/Xm = j), 0 ≤ m ≤ n, j, k ∈ S

When the DTMC is time homogeneous, Pjk(m, n) depends only on the difference n − m. So we define Pjk(n) = P(Xm+n = k / Xm = j) to be the n-step transition probability function.

3.3 One Step Transition Probability


If we put n = 1 in the n-step transition probability function, we get Pjk(1) = Pjk = P(Xn+1 = k / Xn = j). We can arrange the Pjk in a matrix P = [pij] called the one-step transition probability matrix or the stochastic matrix. This matrix satisfies:
• pij ≥ 0
• Σj pij = 1 for every i ∈ S
• P^k represents the k-step transition probability matrix.
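As a quick numerical check (the transition probabilities below are arbitrary illustrative values), the k-step matrix is just the k-th matrix power, and its rows still sum to 1:

```python
import numpy as np

# illustrative two-state chain: row i gives the transition probabilities out of state i
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

P3 = np.linalg.matrix_power(P, 3)  # 3-step transition probability matrix
row_sums = P3.sum(axis=1)          # every row of a stochastic matrix sums to 1
```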

3.4 State Transition Diagram


A directed graph with the directed weight between vertices i and j as the one step transition probability
Pij .

Figure 1: Example of a state transition diagram

3.5 Chapman-Kolmogorov Equations


The transition probability from state i to state j in m + n steps is given by,

Pij(m + n) = Σk Pik(m) Pkj(n)

3.6 Important Terminology


3.6.1 Communicating States
Two states i and j are said to communicate if there exist m, n such that Pij(n) > 0 and Pji(m) > 0. We denote this as i ↔ j.

3.6.2 Class
A class of states is a subset of the state space S such that every state of the class communicates with every other state of the class, and no state outside the class communicates with any state of the class.

3.6.3 Periodicity
A state i is called a return state if Pii(n) > 0 for some n ≥ 1. The period of the state is the GCD of all m such that Pii(m) > 0.

3.6.4 Closed Set of States


If C is a set of states such that no state outside C can be reached from any state in C then it is a
closed set of states. If the cardinality of C is 1, then the state inside C is called an absorbing state.

3.6.5 Irreducible Markov Chain


A Markov Chain which does not contain any proper closed subset of the state space other that the
state space itself is irreducible. All states of an irreducible Markov chain have the same period.

3.6.6 First Visit


The probability of visiting the state k for the first time at the n-th step starting from j is called the first visit probability and is denoted by fjk(n). Note that,

Pjk(n) = Σ_{r=1}^{n} fjk(r) Pkk(n − r)

We denote by Fjk the probability that the Markov chain ever reaches the state k starting from j. It is therefore,

Fjk = Σ_{n=1}^{∞} fjk(n)

3.6.7 Mean Recurrence Time


The mean number of steps needed to first reach the state k starting from j is

µjk = Σ_{n=1}^{∞} n fjk(n)

For j = k, µjj is called the mean recurrence time of state j.

3.7 Classification of States


1. Recurrent State: A state is said to be recurrent or persistent if Fjj = 1. These are of two types:
• Null Recurrent: If µjj = ∞.
• Positive Recurrent: If µjj is finite. Note that a finite Markov chain must have at least one positive recurrent state. If the Markov chain is both finite and irreducible, then all states are positive recurrent.
Any state which is positive recurrent and aperiodic (period = 1) is called an ergodic state.
2. Transient State: A state is said to be transient if Fjj < 1 (i.e. return to it is not guaranteed).

3.8 Stationary and Limiting Distributions
Let π(n) be defined as,

π(n) = [P(Xn = 0), P(Xn = 1), . . .]

The limit of π(n) as n → ∞ (if it exists) is called the limiting distribution and is denoted by π. If the limiting distribution exists, it is the same as the stationary distribution, which satisfies

πP = π,   Σi πi = 1

Alternatively we may find it from the limit of the matrix P^n as n → ∞.
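A numerical sketch for a small chain (matrix values illustrative): πP = π says π is the left eigenvector of P with eigenvalue 1, normalised to sum to 1.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# pi P = pi  <=>  P^T pi^T = pi^T: take the eigenvector of P^T for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = v / v.sum()  # normalise so the entries sum to 1
```

For this chain the result matches the limiting-matrix approach: every row of P^n converges to π as n grows.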

4 Continuous Time Markov Chains


4.1 CTMC
A discrete state, continuous time stochastic process is called a CTMC if for all 0 ≤ t0 < t1 < . . . < tn < t it satisfies,

P(X(t) = x / X(t0) = x0, . . . , X(tn) = xn) = P(X(t) = x / X(tn) = xn) ∀ n

4.2 Stationary Transition Probability


We define the stationary transition probability as Pij(t) = P(X(t) = j / X(0) = i). The initial state probability vector is given by π(0) = [π0(0), π1(0), . . .], and at time t the state probabilities are given by,

πj(t) = Σi πi(0) Pij(t)

4.3 Infinitesimal Generator Matrix


The holding time in state i (the time until the next transition) is exponentially distributed with parameter λi (say). In this case the infinitesimal generator matrix Q = [qij] is given as,

qij = λi pij for i ≠ j,   qij = −λi for i = j

Here, pij is the corresponding entry of the transition matrix of the embedded jump chain. Note that Q satisfies:
• qij ≥ 0 for i ≠ j
• Σj qij = 0 for every row i

4.4 Kolmogorov Equations


1. Forward Kolmogorov Equations: P′(t) = P(t)Q
2. Backward Kolmogorov Equations: P′(t) = QP(t)
We can get the state probability vector at any time using π(t) = π(0)P(t).

4.5 Limiting and Stationary Distributions


The probability vector as t → ∞ is called the limiting distribution. For an irreducible, positive recurrent CTMC the limiting distribution exists. If the limiting distribution exists, it is the same as the stationary distribution given by

πQ = 0

and also satisfying Σi πi = 1.
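Numerically, πQ = 0 together with Σi πi = 1 is a linear system; a sketch for a hypothetical two-state generator (rates illustrative):

```python
import numpy as np

# hypothetical generator: leave state 0 at rate 2, leave state 1 at rate 3
Q = np.array([[-2.0,  2.0],
              [ 3.0, -3.0]])

# stack Q^T with a row of ones to impose the normalisation sum(pi) = 1
A = np.vstack([Q.T, np.ones(Q.shape[0])])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
```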

5 Poisson Process
A stochastic process {N(t)} is a Poisson process with rate λ if it satisfies:

• N(0) = 0
• Increments are stationary and independent
• For every t > 0, N(t) ∼ Poisson(λt)
The inter-arrival times of a Poisson process are exponentially distributed with the same parameter λ.
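This equivalence can be checked by simulation (the rate λ = 5 and horizon t = 1 are illustrative): generate arrivals by summing Exp(λ) inter-arrival times and count how many land in [0, t]; over many runs the mean count should be close to λt.

```python
import random

def poisson_count(lam, t, rng):
    """Count arrivals in [0, t] when inter-arrival times are Exp(lam)."""
    n, clock = 0, rng.expovariate(lam)
    while clock <= t:
        n += 1
        clock += rng.expovariate(lam)  # add the next inter-arrival time
    return n

rng = random.Random(42)
samples = [poisson_count(5.0, 1.0, rng) for _ in range(20000)]
mean_count = sum(samples) / len(samples)  # should be close to lambda * t = 5
```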

6 Queueing Models
A model in which customers arrive into a queue and are serviced by one or more servers. The customers leave after receiving service.

6.1 Kendall Notation


Queueing Models are classified and denoted using the Kendall Notation as,

A/B/X/Y/Z

Here,
• A: Distribution of inter-arrival times
• B: Service time distribution
• X: Number of servers in the system
• Y: Maximum number of customers in the system
• Z: Queue discipline (e.g. first come first served)
If we do not mention Y we take it to be infinite. We can deduce quantities of interest of different queueing models by solving the Kolmogorov differential equations.

6.2 Little’s Law


In the long run,

E(N) = λ E(R)

i.e. the arrival rate times the average time a customer spends in the system gives the average number of customers in the system.
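As a sanity check (rates illustrative), the M/M/1 formulas of the next subsection satisfy Little's law exactly: λ · E(R) = λ/(µ − λ) = ρ/(1 − ρ) = E(N).

```python
lam, mu = 2.0, 5.0      # illustrative arrival and service rates, lam < mu
rho = lam / mu

EN = rho / (1 - rho)    # mean number in system (M/M/1)
ER = 1 / (mu - lam)     # mean response time (M/M/1)
EN_little = lam * ER    # Little's law: should equal EN
```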

6.3 M/M/1 Queueing Model


Arrivals are Poisson with parameter λ and service times are exponential with parameter µ. There is a single server and infinite capacity. Define ρ = λ/µ (assume ρ < 1 for stability). Properties:

• Steady state distribution: πn = (1 − ρ)ρ^n
• Average number of customers: Σn n πn = ρ/(1 − ρ)
• Server utilization: 1 − π0 = ρ
• Average response time: 1/(µ − λ)
• Average waiting time: ρ/(µ − λ)
• Waiting time distribution: fW(t) = ρ(µ − λ)e^{−(µ−λ)t} for t > 0 (with an atom P(W = 0) = 1 − ρ)
• Response time distribution: R ∼ Exp(µ(1 − ρ)), i.e. fR(t) = (µ − λ)e^{−(µ−λ)t}
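The formulas above can be bundled into a small helper; the rates passed in are illustrative and must satisfy λ < µ for a stable queue.

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics (requires lam < mu)."""
    if not lam < mu:
        raise ValueError("M/M/1 is stable only for lam < mu")
    rho = lam / mu
    return {
        "utilization": rho,                 # 1 - pi_0
        "mean_customers": rho / (1 - rho),  # E(N)
        "mean_response": 1 / (mu - lam),    # E(R)
        "mean_wait": rho / (mu - lam),      # E(W)
    }

m = mm1_metrics(3.0, 5.0)
```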

6.4 M/M/c Queueing Model


Arrivals are Poisson with parameter λ and service times are exponential with parameter µ. There are c servers and infinite capacity. This is a birth-death process with constant birth rates λn = λ and death rates

µn = nµ for 1 ≤ n < c,   µn = cµ for n ≥ c

Steady state distribution (with ρ = λ/µ):

πn = (ρ^n / n!) π0 for 1 ≤ n < c,   πn = (ρ^n / (c^{n−c} c!)) π0 for n ≥ c

π0 can be deduced by using Σi πi = 1. The expected number of busy servers is λ/µ = ρ.
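The normalisation for π0 sums the finite part (n < c) and a geometric tail (n ≥ c); a sketch, with illustrative parameters and ρ = λ/µ < c assumed:

```python
from math import factorial

def mmc_pi0(lam, mu, c):
    """Empty-system probability of an M/M/c queue (requires lam < c * mu)."""
    rho = lam / mu  # offered load, as in the pi_n formula above
    finite = sum(rho**n / factorial(n) for n in range(c))
    tail = rho**c / (factorial(c) * (1 - rho / c))  # geometric sum over n >= c
    return 1.0 / (finite + tail)

pi0 = mmc_pi0(lam=2.0, mu=1.5, c=3)
```

For c = 1 this reduces to the M/M/1 result π0 = 1 − ρ, which is a useful consistency check.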

6.5 M/M/1/N
Similar to M/M/1, but customers who find the queue full are rejected. The effective arrival rate is given by λeff = λ(1 − πN).
