Continuous Markov Chain

The document covers absorbing states in Markov chains, with examples from gambling and credit evaluation, and continuous time Markov chains, including their related random variables, steady-state probabilities, and worked examples. An absorbing state is one in which the chain remains forever once it is reached; the document defines absorption probabilities and shows how to calculate them.


Topics for Today's Class

1. Absorbing States
2. Examples on Absorbing States (Gambling and Credit Evaluation)
3. Continuous Time Markov Chain
   a. Random Variables related to Continuous Time Markov Chain
   b. Steady-State Probabilities
   c. Examples on Continuous Time Markov Chain
Absorbing States

A state $k$ is said to be an absorbing state if $p_{kk} = 1$, so that once the chain visits $k$, it remains there forever.

Probability of Absorption:
If $k$ is an absorbing state and the process starts from state $i$, the probability of ever going to state $k$ is called the probability of absorption into state $k$, given that the system started in state $i$.
Absorbing States
a. $f_{ik}$ represents the probability of absorption into state $k$, given that the system started in state $i$.
b. If there are two or more absorbing states in a Markov chain, the process will eventually be absorbed into one of them.
c. The goal is to find these absorption probabilities.
Absorbing States
d. Like regular Markov chains, absorbing Markov chains have the property that the powers of the transition matrix approach a limiting matrix.

e. Identify the absorbing states for the following transition matrices (a quick check appears in the sketch below):

$$P_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0.3 & 0.7 & 0 \\ 0 & 0.2 & 0.8 \end{pmatrix}, \qquad P_2 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$$
Absorbing States
The absorption probabilities can be obtained by solving the following system of linear equations:

$$f_{ik} = \sum_{j=0}^{M} p_{ij}\, f_{jk}, \quad \text{for } i = 0, 1, 2, \ldots, M,$$

subject to the conditions

$$f_{kk} = 1,$$
$$f_{ik} = 0, \quad \text{if state } i \text{ is recurrent and } i \neq k.$$

Note: Absorption probabilities are important in random walks. A solver sketch for this system follows.
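This system can be solved mechanically. Below is a minimal Python sketch (not from the slides; the function name is mine), assuming every non-absorbing state is transient so that the linear system restricted to the transient states is nonsingular:

```python
import numpy as np

def absorption_probabilities(P, k):
    """Solve f_ik = sum_j p_ij f_jk for absorption into absorbing state k.

    Unknowns are the f_ik for the transient states; the boundary
    conditions are f_kk = 1 and f_ik = 0 for other absorbing states i."""
    M = P.shape[0]
    absorbing = [i for i in range(M) if np.isclose(P[i, i], 1.0)]
    transient = [i for i in range(M) if i not in absorbing]
    # Rearranged equations for transient i:
    #   f_ik - sum_{j transient} p_ij f_jk = p_ik
    A = np.eye(len(transient)) - P[np.ix_(transient, transient)]
    b = P[transient, k]
    f = np.zeros(M)
    f[k] = 1.0
    f[transient] = np.linalg.solve(A, b)
    return f
```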
Gambling Example:
Remark: The presence of an absorbing state in a transition matrix does not guarantee that the powers of the matrix approach a limiting matrix (see the second matrix in the exercise above):

$$P = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$$

Here $P^2 = I$, so the powers alternate between $P$ and the identity matrix and never converge.
Gambling Example:
Suppose that two players A and B, each having $2, agree to keep playing the game, betting $1 at a time, until one player goes broke. The probability of A winning a single bet is 1/3, so B wins the bet with probability 2/3. Let the state be A's current capital; the states are then 0, 1, 2, 3, 4, and states 0 and 4 are absorbing.

Calculate:
- the probability of absorption into state 0, starting from state 2;
- the probability of absorption into state 4, starting from state 2.
Solution:
1. Starting from state 2, the probability of absorption into state 0 can be obtained by solving for $f_{20}$ from the following equations:

$$f_{00} = 1 \quad (\text{since state 0 is absorbing}),$$
$$f_{10} = \tfrac{2}{3}\,f_{00} + \tfrac{1}{3}\,f_{20}, \qquad f_{20} = \tfrac{2}{3}\,f_{10} + \tfrac{1}{3}\,f_{30},$$
$$f_{30} = \tfrac{2}{3}\,f_{20} + \tfrac{1}{3}\,f_{40},$$
$$f_{40} = 0 \quad (\text{since state 4 is absorbing}).$$

Substituting gives

$$f_{20} = \tfrac{2}{3}\left(\tfrac{2}{3} + \tfrac{1}{3}\,f_{20}\right) + \tfrac{1}{3}\left(\tfrac{2}{3}\,f_{20}\right) = \tfrac{4}{9} + \tfrac{4}{9}\,f_{20},$$

which reduces to $f_{20} = \tfrac{4}{5}$ as the probability of absorption into state 0.
Solution:
2. The probability of A finishing with $4 when starting with $2 is obtained by solving for $f_{24}$ from the system of equations:

$$f_{04} = 0 \quad (\text{since state 0 is absorbing}),$$
$$f_{14} = \tfrac{2}{3}\,f_{04} + \tfrac{1}{3}\,f_{24}, \qquad f_{24} = \tfrac{2}{3}\,f_{14} + \tfrac{1}{3}\,f_{34},$$
$$f_{34} = \tfrac{2}{3}\,f_{24} + \tfrac{1}{3}\,f_{44},$$
$$f_{44} = 1 \quad (\text{since state 4 is absorbing}).$$

Substituting gives

$$f_{24} = \tfrac{2}{3}\left(\tfrac{1}{3}\,f_{24}\right) + \tfrac{1}{3}\left(\tfrac{2}{3}\,f_{24} + \tfrac{1}{3}\right) = \tfrac{4}{9}\,f_{24} + \tfrac{1}{9},$$

which reduces to $f_{24} = \tfrac{1}{5}$ as the probability of absorption into state 4. As a check, $f_{20} + f_{24} = \tfrac{4}{5} + \tfrac{1}{5} = 1$: starting from state 2, the process is certain to be absorbed into one of the two absorbing states.
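As a numeric check (not part of the slides), the gambler's-ruin chain can be fed to the absorption_probabilities sketch above:

```python
import numpy as np

# State = A's current capital (0..4); A wins a $1 bet with probability 1/3.
p, q = 1/3, 2/3
P = np.array([[1, 0, 0, 0, 0],
              [q, 0, p, 0, 0],
              [0, q, 0, p, 0],
              [0, 0, q, 0, p],
              [0, 0, 0, 0, 1]], dtype=float)

f0 = absorption_probabilities(P, k=0)  # from the sketch above
f4 = absorption_probabilities(P, k=4)
print(f0[2], f4[2])  # 0.8 0.2  -> f_20 = 4/5, f_24 = 1/5
```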
Continuous Time Markov Chains
Formulation: Suppose the possible states of the system are 0, 1, 2, ..., M. Starting at time 0, the time parameter t runs continuously over t ≥ 0. The random variable X(t) takes on one of its (M+1) possible values over some interval 0 ≤ t < t_1, then jumps to another value over the next interval t_1 ≤ t < t_2, and so on; the evolution of the process is observed continuously over time.

Markovian Property (Continuous Case)
A continuous time stochastic process $\{X(t);\ t \ge 0\}$ has the Markovian property if

$$P\{X(t+s) = j \mid X(s) = i,\ X(r) = x(r) \text{ for } 0 \le r < s\} = P\{X(t+s) = j \mid X(s) = i\}$$

for all states $i, j$ and all $t, s \ge 0$. $P\{X(t+s) = j \mid X(s) = i\}$ is called a transition probability.
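A standard consequence of the Markovian property, not shown on the slides, is that the process stays in each state for an exponentially distributed holding time and then jumps with probabilities proportional to the transition rates; these holding times are the random variables related to a continuous time Markov chain from the topics list. A minimal simulation sketch (all names mine), assuming the chain is given by a rate matrix Q with off-diagonal rates q_ij and diagonal entries -q_i:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ctmc(Q, i0, t_end):
    """Simulate one path: hold in state i for an Exp(q_i) time,
    then jump to j != i with probability q_ij / q_i."""
    t, i, path = 0.0, i0, [(0.0, i0)]
    while True:
        q_i = -Q[i, i]                   # total transition rate out of state i
        if q_i == 0.0:                   # absorbing state: stay forever
            break
        t += rng.exponential(1.0 / q_i)  # exponential holding time
        if t >= t_end:
            break
        jump = Q[i].copy()
        jump[i] = 0.0
        i = int(rng.choice(len(jump), p=jump / q_i))
        path.append((t, i))
    return path

# Example: a two-state chain with rates q_01 = 1 and q_10 = 2 (illustrative).
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
print(simulate_ctmc(Q, i0=0, t_end=5.0))
```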


Continuous Time Markov Chains

Stationary Transition Probabilities:

If the transition probabilities are independent of s, so that

$$P\{X(t+s) = j \mid X(s) = i\} = P\{X(t) = j \mid X(0) = i\} \quad \text{for all } s > 0,$$

then they are called stationary transition probabilities, written

$$p_{ij}(t) = P\{X(t) = j \mid X(0) = i\}.$$

$p_{ij}(t)$ is referred to as the continuous time transition probability function. It satisfies

$$\lim_{t \to 0} p_{ij}(t) = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases}$$
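The slides do not show how to compute $p_{ij}(t)$ explicitly. One standard method (an assumption here, not taken from the original) is the matrix exponential $p(t) = e^{Qt}$ of the rate matrix $Q$ used in the steady-state slides below; a sketch with an arbitrary illustrative $Q$:

```python
import numpy as np
from scipy.linalg import expm

# Rate matrix Q: off-diagonal entries are the rates q_ij; each diagonal
# entry is -q_i, minus the total rate out of state i (illustrative values).
Q = np.array([[-2.0,  2.0,  0.0],
              [ 2.0, -3.0,  1.0],
              [ 0.0,  2.0, -2.0]])

def transition_function(Q, t):
    """p_ij(t) = P{X(t) = j | X(0) = i}, computed as exp(Qt)."""
    return expm(Q * t)

print(transition_function(Q, 1e-8).round(6))   # ~identity: lim_{t->0} p(t) = I
print(transition_function(Q, 100.0).round(4))  # each row ~ steady-state probabilities
```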
Steady State Probabilities
The continuous time transition probability function satisfies the following Chapman-Kolmogorov equations:

$$p_{ij}(t) = \sum_{k=0}^{M} p_{ik}(s)\, p_{kj}(t-s), \quad \text{for any states } i, j \text{ and any } 0 \le s \le t.$$

Moreover, $\lim_{t \to \infty} p_{ij}(t) = \pi_j$ always exists and is independent of the initial state of the Markov chain, for j = 0, 1, 2, ..., M. These limiting probabilities are commonly referred to as the steady-state probabilities of the Markov chain, where $\pi_j$ is the steady-state probability that the process is in state j.
Steady State Probabilities
The steady-state probabilities $\pi_j$ satisfy the following equations (a solver sketch follows below):

$$\pi_j = \sum_{i=0}^{M} \pi_i\, p_{ij}(t), \quad \text{for } j = 0, 1, \ldots, M \text{ and every } t \ge 0,$$

$$\pi_j\, q_j = \sum_{i \neq j} \pi_i\, q_{ij}, \quad \text{for } j = 0, 1, \ldots, M \text{ (the balance equations)},$$

and

$$\sum_{j=0}^{M} \pi_j = 1,$$

where $q_j$ is the transition rate out of state j, given that the process is in state j, and $q_{ij}$ is the transition rate from state i to state j, given that the process is in state i.
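Because the balance equations are linear, they can be solved directly. A minimal Python sketch (not from the slides; the function name is mine): one balance equation is always redundant, so replace it with the normalization and solve.

```python
import numpy as np

def steady_state(Q):
    """Solve the balance equations pi_j q_j = sum_{i != j} pi_i q_ij
    together with sum_j pi_j = 1.

    Q is the rate matrix: Q[i, j] = q_ij for i != j and Q[j, j] = -q_j,
    so the balance equations read (Q^T pi)_j = 0."""
    M = Q.shape[0]
    A = Q.T.copy()
    A[-1, :] = 1.0          # replace one redundant equation by normalization
    b = np.zeros(M)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```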
Steady State Probabilities
Intuitive interpretation of the steady-state equations:

1. The steady-state (balance) equation for state j has an intuitive interpretation. The left-hand side, $\pi_j q_j$, is the rate at which the process leaves state j, since $\pi_j$ is the (steady-state) probability that the process is in state j and $q_j$ is the transition rate out of state j given that the process is in state j.

2. Similarly, each term on the right-hand side, $\pi_i q_{ij}$, is the rate at which the process enters state j from state i, since $q_{ij}$ is the transition rate from state i to state j given that the process is in state i. Summing over all $i \neq j$, the entire right-hand side gives the rate at which the process enters state j from any other state. The overall equation therefore states that the rate at which the process leaves state j must equal the rate at which the process enters state j.
Solution (Continuous Time Markov Chains)

[Rate diagram for the example of a continuous time Markov chain; the diagram itself is not reproduced here. From the conclusion below, the states are the number of machines currently broken down: 0, 1, or 2.]

The transition rates are summarized in the rate diagram. These rates can now be used to calculate the total transition rate $q_j$ out of each state.
Solution (Continuous Time Markov Chains)

Plugging all the rates into the steady-state equations gives the balance equations for states 0, 1, and 2. [Equations shown on the original slide; not reproduced here.] Any one of the balance equations (say, the second) can be deleted as redundant, and the simultaneous solution of the remaining equations gives the steady-state distribution as

$$(\pi_0, \pi_1, \pi_2) = (0.4,\ 0.4,\ 0.2).$$

Thus, in the long run, both machines will be broken down simultaneously 20 percent of the time, and one machine will be broken down another 40 percent of the time.
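The specific rates were on the slide images. As an illustration only, the sketch below assumes the rates of the classic two-machine repair model (each machine breaks down at rate 1 per unit time; a single repairman repairs at rate 2); these assumed rates reproduce the stated 40/40/20 distribution via the steady_state solver sketched earlier.

```python
import numpy as np

# Assumed rates (the slide's own numbers are not reproduced above).
# States 0, 1, 2 = number of machines broken down.
Q = np.array([[-2.0,  2.0,  0.0],   # state 0: two machines up, each failing at rate 1
              [ 2.0, -3.0,  1.0],   # state 1: repair at rate 2, remaining machine fails at rate 1
              [ 0.0,  2.0, -2.0]])  # state 2: both down, repairman works at rate 2

print(steady_state(Q))  # [0.4 0.4 0.2] -> matches the stated long-run fractions
```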
