Markov Chains

A Markov chain is a mathematical system that experiences transitions from one state to another according to certain probabilistic rules. The defining characteristic of a Markov chain is that no matter how the process arrived at its present state, the possible future states are fixed. In other words, the probability of transitioning to any particular state depends solely on the current state and time elapsed. The state space, or set of all possible states, can be anything: letters, numbers, weather conditions, baseball scores, or stock performances.

Markov chains may be modeled by finite state machines, and random walks provide a prolific example of their useful mathematics. They arise broadly in statistical and information-theoretical contexts and are widely employed in economics, game theory, queueing (communication) theory, genetics, and finance. While it is possible to discuss Markov chains with any size of state space, the initial theory and most applications focus on cases with a finite (or countably infinite) number of states. Many uses of Markov chains require proficiency with common matrix methods.

Contents

Basic Concept
Transition Matrices
Properties
See Also

Basic Concept

A Markov chain is a stochastic process, but it differs from a general stochastic process in that a Markov chain must be "memoryless." That is, (the probability of) future actions are not dependent upon the steps that led up to the present state. This is called the Markov property. While the theory of Markov chains is important precisely because so many "everyday" processes satisfy the Markov property, there are many common examples of stochastic processes that do not satisfy it.

A common probability question asks what the probability of getting a certain color ball is when selecting uniformly at random from a bag of multi-colored balls. It could also ask what the probability of the next ball is, and so on. In such a case, a stochastic process exists with color as the random variable, and it does not satisfy the Markov property: depending upon which balls have already been removed, the probability of getting a certain color ball later may be drastically different.

[Figure: a stochastic process drawing from a bag of balls without replacement; the random variable is the color of the drawn ball.]

A variant of the same question asks once again for ball color, but it allows replacement each time a ball is drawn. Once again, this creates a stochastic process with color as the random variable. This process, however, does satisfy the Markov property. Can you figure out why?

[Figure: a Markov chain drawing from a bag of balls with replacement; the random variable is the color of the drawn ball.]

In probability theory, the most immediate example is that of a time-homogeneous Markov chain, in which the probability of any state transition is independent of time. Such a process may be visualized with a labeled directed graph, for which the sum of the labels of any vertex's outgoing edges is 1.

A (time-homogeneous) Markov chain built on states A and B is depicted in the diagram below. What is the probability that a process beginning on A will be on B after 2 moves?

[Figure: two-state chain with P(A to A) = 0.3, P(A to B) = 0.7, P(B to A) = 0.8, P(B to B) = 0.2.]

In order to move from A to B, the process must either stay on A the first move and then move to B the second move, or move to B the first move and then stay on B the second move. According to the diagram, the probability of that is 0.3 · 0.7 + 0.7 · 0.2 = 0.35.

Alternatively, the probability that the process will be on A after 2 moves is 0.3 · 0.3 + 0.7 · 0.8 = 0.65. Since there are two states in the chain, the process must be on B if it is not on A, and therefore the probability that the process will be on B after 2 moves is 1 − 0.65 = 0.35.
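The same two-step probability can also be read off the square of the chain's transition matrix (introduced formally in the next section). Below is a minimal sketch of that check, not part of the original article; it assumes the state ordering [A, B] and uses NumPy.

```python
import numpy as np

# Transition matrix of the two-state chain above, with the (assumed) state ordering [A, B].
P = np.array([
    [0.3, 0.7],   # from A: stay on A with probability 0.3, move to B with 0.7
    [0.8, 0.2],   # from B: move to A with probability 0.8, stay on B with 0.2
])

# Two-step transition probabilities are the entries of P squared.
P2 = np.linalg.matrix_power(P, 2)

# Probability of being on B after 2 moves, starting from A.
print(P2[0, 1])   # 0.3*0.7 + 0.7*0.2 ≈ 0.35
```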
In the language of conditional probability and random variables, a Markov chain is a sequence X_0, X_1, X_2, ... of random variables satisfying the rule of conditional independence:

The Markov Property. For any positive integer n and possible states i_0, i_1, ..., i_n of the random variables,

\[ P(X_n = i_n \mid X_{n-1} = i_{n-1}) = P(X_n = i_n \mid X_0 = i_0,\, X_1 = i_1,\, \ldots,\, X_{n-1} = i_{n-1}). \]

In other words, knowledge of the previous state is all that is necessary to determine the probability distribution of the current state. This definition is broader than the one explored above, as it allows for non-stationary transition probabilities and therefore time-inhomogeneous Markov chains; that is, as time goes on (steps increase), the probability of moving from one state to another may change.

Transition Matrices

A transition matrix P_t for the Markov chain {X_t} at time t is a matrix containing information on the probability of transitioning between states. In particular, given an ordering of the matrix's rows and columns by the state space S, the (i, j)-th element of the matrix P_t is given by

\[ (P_t)_{i,j} = P(X_{t+1} = j \mid X_t = i). \]

This means each row of the matrix is a probability vector, and the sum of its entries is 1.

Transition matrices have the property that the product of subsequent ones describes a transition along the time interval spanned by the transition matrices. That is to say, P_0 \cdot P_1 has in its (i, j)-th position the probability that X_2 = j given that X_0 = i, and, in general, the (i, j)-th position of P_t \cdot P_{t+1} \cdots P_{t+k} is the probability P(X_{t+k+1} = j \mid X_t = i).

Prove that, for any natural number t and states i, j \in S, the matrix entry satisfies (P_t \cdot P_{t+1})_{i,j} = P(X_{t+2} = j \mid X_t = i).

Denote M = P_t \cdot P_{t+1}. By matrix multiplication,

\[ M_{i,j} = \sum_{k \in S} (P_t)_{i,k} (P_{t+1})_{k,j} = \sum_{k \in S} P(X_{t+1} = k \mid X_t = i)\, P(X_{t+2} = j \mid X_{t+1} = k) = P(X_{t+2} = j \mid X_t = i). \]

The final equality follows from conditional probability, summing over all intermediate states k.

The k-step transition matrix is P^{(k)} = P_t \cdot P_{t+1} \cdots P_{t+k-1} and, by the above, satisfies

\[ P^{(k)} = \begin{pmatrix} P(X_{t+k} = 1 \mid X_t = 1) & P(X_{t+k} = 2 \mid X_t = 1) & \cdots & P(X_{t+k} = n \mid X_t = 1) \\ P(X_{t+k} = 1 \mid X_t = 2) & P(X_{t+k} = 2 \mid X_t = 2) & \cdots & P(X_{t+k} = n \mid X_t = 2) \\ \vdots & \vdots & \ddots & \vdots \\ P(X_{t+k} = 1 \mid X_t = n) & P(X_{t+k} = 2 \mid X_t = n) & \cdots & P(X_{t+k} = n \mid X_t = n) \end{pmatrix}. \]

For the time-independent Markov chain described by the picture below, what is its 2-step transition matrix?

[Figure: two-state chain with transition probabilities 0.3 and 0.7 out of the first state and 0.9 and 0.1 out of the second.]

Note that the transition matrix is

\[ P = \begin{pmatrix} 0.3 & 0.7 \\ 0.9 & 0.1 \end{pmatrix}. \]

It follows that the 2-step transition matrix is

\[ P^{(2)} = \begin{pmatrix} 0.3 & 0.7 \\ 0.9 & 0.1 \end{pmatrix} \begin{pmatrix} 0.3 & 0.7 \\ 0.9 & 0.1 \end{pmatrix} = \begin{pmatrix} 0.3 \cdot 0.3 + 0.7 \cdot 0.9 & 0.3 \cdot 0.7 + 0.7 \cdot 0.1 \\ 0.9 \cdot 0.3 + 0.1 \cdot 0.9 & 0.9 \cdot 0.7 + 0.1 \cdot 0.1 \end{pmatrix} = \begin{pmatrix} 0.72 & 0.28 \\ 0.36 & 0.64 \end{pmatrix}. \]

A Markov chain has first state A and second state B, and its transition probabilities for all time are given by the following graph. What is its transition matrix?

Note: The transition matrix is oriented such that the k-th row represents the set of probabilities of transitioning from state k to another state.

[Figure: transition graph on states A and B for this exercise.]

Once people arrive in Thailand, they want to enjoy the sun and beaches on 2 popular islands in the south: Samui Island and Phangan Island. From survey data, when on the mainland, 70% of tourists plan to go to Samui Island, 20% to Phangan Island, and only 10% remain on shore the next day. When on Samui Island, 40% continue to stay on Samui, 50% plan to go to Phangan Island, and only 10% return to the mainland the next day. Finally, when on Phangan Island, 30% prolong their stay there, 30% divert to Samui Island, and 40% go back to the mainland the next day. Starting from the mainland, what is the probability (in percentage) that the travelers will be on the mainland at the end of a 3-day trip?
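One way to organize this computation is as a sketch, not part of the original article: form the one-day transition matrix from the survey percentages, take its third power, and read off the mainland-to-mainland entry. The state ordering (mainland, Samui, Phangan) is an assumption made here.

```python
import numpy as np

# One-day transition matrix built from the survey data.
# Assumed state ordering: 0 = mainland, 1 = Samui Island, 2 = Phangan Island.
P = np.array([
    [0.1, 0.7, 0.2],   # from the mainland
    [0.1, 0.4, 0.5],   # from Samui Island
    [0.4, 0.3, 0.3],   # from Phangan Island
])

# The distribution after a 3-day trip comes from the third power of P.
P3 = np.linalg.matrix_power(P, 3)

# Starting from the mainland, probability of ending the trip on the mainland.
print(P3[0, 0])   # ≈ 0.229, i.e. about 22.9%
```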
Properties

A variety of descriptions of either a specific state in a Markov chain or the entire Markov chain allow for better understanding of the Markov chain's behavior. Let P be the transition matrix of the Markov chain {X_0, X_1, ...}.

* A state i has period k ≥ 1 if any chain starting at and returning to state i with positive probability must take a number of steps divisible by k. If k = 1, the state is known as aperiodic, and if k > 1, the state is known as periodic. If all states are aperiodic, then the Markov chain is known as aperiodic.
* A Markov chain is known as irreducible if there exists a chain of steps between any two states that has positive probability.
* An absorbing state i is a state for which P_{i,i} = 1. Absorbing states are crucial for the discussion of absorbing Markov chains.
* A state is known as recurrent or transient depending upon whether or not the Markov chain will eventually return to it. A recurrent state is known as positive recurrent if it is expected to return within a finite number of steps, and null recurrent otherwise.
* A state is known as ergodic if it is positive recurrent and aperiodic. A Markov chain is ergodic if all of its states are.

Irreducibility and periodicity both concern the locations a Markov chain could be at some later point in time, given where it started. Stationary distributions deal with the likelihood of a process being in a certain state at an unknown point of time. For Markov chains with a finite number of states, each of which is positive recurrent, an aperiodic Markov chain is the same as an irreducible Markov chain.

The graphs of two time-homogeneous Markov chains are shown below. Determine facts about their periodicity and reducibility.

[Figure: transition graphs of the two Markov chains.]
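The stationary distribution mentioned above can be computed numerically for a small chain by solving πP = π together with the normalization Σ_i π_i = 1. The sketch below shows one way to do this; the helper name stationary_distribution and the use of a least-squares solve are choices made here, not taken from the article, and it is applied to the two-state chain from the earlier example.

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi @ P = pi with sum(pi) = 1 for a row-stochastic matrix P."""
    n = P.shape[0]
    # Stack the balance equations (P^T - I) pi = 0 with the normalization row sum(pi) = 1.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Two-state chain from the earlier example, with states ordered [A, B].
P = np.array([[0.3, 0.7],
              [0.8, 0.2]])
print(stationary_distribution(P))   # ≈ [0.533, 0.467], i.e. pi_A = 8/15 and pi_B = 7/15
```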

See Also

Markov chain subpages:
* Stationary Distributions
* Ergodic Markov Chains
* Absorbing Markov Chains
* Transience and Recurrence

Miscellaneous:
* Eigenvectors
* Finite State Machines
* Matrices
* Random Walks