INTRO TO STOCHASTIC PROCESSES
These 3-page class notes were uploaded by Reyes Glover on Sunday, September 6, 2015. They belong to M 362M at the University of Texas at Austin, taught by Staff in Fall.
M 362M: Introduction to Stochastic Processes
University of Texas at Austin
Instructor: Gordan Zitkovic, Fall 2008

Review Problems 11: Markov Chains, Absorption and Reward

Problem 11. Let {X_n : n in N_0} be a Markov chain with the following transition matrix:

        [ 1/2  1/2   0  ]
    P = [ 1/3  1/3  1/3 ]
        [  0    0    1  ]

Suppose that the chain starts from the state 1.

1. What is the expected time that will pass before the chain first hits 3?
2. What is the expected number of visits to state 2 before 3 is hit?
3. Would your answers to 1. and 2. change if we replaced the values in the third row of P by any other values (as long as P remains a stochastic matrix)? Would 1 and 2 still be transient states?
4. Use the idea of part 3. to answer the following question: what is the expected number of visits to the state 2 before a Markov chain with transition matrix

        [ 17/20   1/20  1/10 ]
    P = [  1/15  13/15  1/15 ]
        [  2/5    4/15   1/3 ]

hits the state 3 for the first time? (The initial state is still 1. Remember this trick for the final!)

Solution.

1. The states 1 and 2 are transient and 3 is recurrent, so the canonical decomposition is S = {3} ∪ {1, 2}, and the canonical form of the transition matrix is

        [  1    0    0  ]
    P = [  0   1/2  1/2 ]
        [ 1/3  1/3  1/3 ]

The matrices Q and R are given by

    Q = [ 1/2  1/2 ]      R = [  0  ]
        [ 1/3  1/3 ],         [ 1/3 ],

and the fundamental matrix F = (I - Q)^(-1) is

    F = [ 4  3 ]
        [ 2  3 ].

The reward function g(1) = g(2) = 1, i.e. g ≡ 1, will give us the expected time until absorption:

    v = F g = [ 4  3 ] [ 1 ]  =  [ 7 ]
              [ 2  3 ] [ 1 ]     [ 5 ].

Since the initial state is i = 1, the expected time before we first hit 3 is v_1 = 7.

2. Here we use the reward function g(1) = 0, g(2) = 1:

    v = F g = [ 4  3 ] [ 0 ]  =  [ 3 ]
              [ 2  3 ] [ 1 ]     [ 3 ],

so the answer is v_1 = 3.

3. No, the answers would not change! Indeed, these values only affect what the chain does after it hits 3 for the first time, and that is irrelevant for calculations about events which happen prior to that. The states 1 and 2, however, would no longer be transient: all states would be recurrent. The moral of the story is that the absorption calculations can be used even in settings where all states are recurrent; you simply need to adjust the probabilities, as shown in the following part of the problem.
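The 2x2 computations above are easy to check by machine. Here is a minimal Python sketch (the helper names `inv2` and `matvec` are mine, not from the course) that inverts I - Q with exact fractions and applies the two reward vectors; it also verifies part 4:

```python
from fractions import Fraction as Fr

def inv2(m):
    """Invert a 2x2 matrix of Fractions."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def matvec(m, v):
    """Matrix-vector product."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

I = [[Fr(1), Fr(0)], [Fr(0), Fr(1)]]

# Q for the original chain (transient states 1 and 2)
Q = [[Fr(1, 2), Fr(1, 2)],
     [Fr(1, 3), Fr(1, 3)]]
F = inv2([[I[i][j] - Q[i][j] for j in range(2)] for i in range(2)])
print([[int(x) for x in row] for row in F])          # [[4, 3], [2, 3]]

# part 1: reward g = (1, 1) gives expected times to absorption
print([int(x) for x in matvec(F, [Fr(1), Fr(1)])])   # [7, 5]
# part 2: reward g = (0, 1) counts visits to state 2
print([int(x) for x in matvec(F, [Fr(0), Fr(1)])])   # [3, 3]

# part 4: state 3 made absorbing in the second matrix
Q2 = [[Fr(17, 20), Fr(1, 20)],
      [Fr(1, 15), Fr(13, 15)]]
F2 = inv2([[I[i][j] - Q2[i][j] for j in range(2)] for i in range(2)])
print(int(F2[0][1]))                                 # F_12 = 3
```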
4. We make the state 3 absorbing (and the states 1 and 2 transient) by replacing the transition matrix with

        [ 17/20   1/20  1/10 ]
    P = [  1/15  13/15  1/15 ]
        [   0      0     1   ]

The new chain behaves exactly like the old one until it hits 3 for the first time. Now we find the canonical decomposition and compute the matrix F as above, and get

    F = [ 8  3 ]
        [ 4  9 ],

so that the expected number of visits to 2, with initial state 1, is F_12 = 3.

Problem 12. A math professor has 4 umbrellas. He keeps some of them at home and some in the office. Every morning, when he leaves home, he checks the weather and takes an umbrella with him if it rains. In case all the umbrellas are in the office, he gets wet. The same procedure is repeated in the afternoon when he leaves the office to go home. The professor lives in a tropical region, so the chance of rain in the afternoon is higher than in the morning: it is 1/5 in the afternoon and 1/20 in the morning. Whether it rains or not is independent of whether it rained the last time he checked.

1. On day 0 there are 2 umbrellas at home and 2 in the office. What is the expected number of days that will pass before the professor gets wet? (Remember, there are two trips each day.)
2. What is the probability that the first time he gets wet it is on his way home from the office?

Solution. We model the situation by a Markov chain whose state space S is given by

    S = { (p, u) : p in {H, O}, u in {0, 1, 2, 3, 4} } ∪ { (H, w), (O, w) },

where the first coordinate denotes the current position of the professor (H for home, O for office) and the second the number of umbrellas at home (then we automatically know how many umbrellas there are at the office). The letter w stands for "wet": the state (H, w) means that the professor left home without an umbrella during a rain and got wet, and (O, w) is interpreted similarly. The transitions between the states are simple to figure out. For example, from the state (H, 2) we either move to (O, 2) with probability 19/20 (no rain) or to (O, 1) with probability 1/20 (rain, so he takes an umbrella along), and from (O, 4) we move to (O, w) with probability 1/5 (rain, but all the umbrellas are at home) and to (H, 4) with probability 4/5.
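The transition rule just described can be assembled and solved without Mathematica. Below is a pure-Python sketch (the state encoding, `solve` helper, and all names are mine): it builds Q over the ten transient states, then solves (I - Q)a = r for the absorption probabilities into (O, w) and (I - Q)t = 1 for the expected number of trips:

```python
# transient states: (place, number of umbrellas at home)
states = [(p, u) for p in "HO" for u in range(5)]
idx = {s: i for i, s in enumerate(states)}
n = len(states)

P_RAIN = {"H": 1 / 20, "O": 1 / 5}   # morning trip vs. afternoon trip

Q = [[0.0] * n for _ in range(n)]
r_wet_office = [0.0] * n             # one-step prob. of landing in (O, w)
for (p, u), i in idx.items():
    rain = P_RAIN[p]
    if p == "H":
        if u == 0:
            pass                               # rain: wet leaving home, absorbed in (H, w)
        else:
            Q[i][idx[("O", u - 1)]] += rain    # takes one umbrella to the office
        Q[i][idx[("O", u)]] += 1 - rain        # no rain: umbrella count unchanged
    else:
        if u == 4:
            r_wet_office[i] += rain            # no umbrella at the office
        else:
            Q[i][idx[("H", u + 1)]] += rain    # brings one umbrella home
        Q[i][idx[("H", u)]] += 1 - rain

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    m = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(m):
        piv = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(m):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[r][m] / M[r][r] for r in range(m)]

ImQ = [[(i == j) - Q[i][j] for j in range(n)] for i in range(n)]
wet_office = solve(ImQ, r_wet_office)    # absorption probabilities for (O, w)
trips = solve(ImQ, [1.0] * n)            # expected trips until absorption

start = idx[("H", 2)]                    # day 0: at home, 2 umbrellas at home
print(round(wet_office[start], 6))       # ~0.989022
print(round(trips[start] / 2, 2))        # ~19.57 days
```

The two printed numbers match the answers quoted in the solution: the professor almost always gets wet on the way home first, after about 20 days on average.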
The states (H, w) and (O, w) are made absorbing, and all the other states are transient. The first question can now be reformulated as a reward problem with reward g ≡ 1 (each trip before absorption contributes 1), and the second one is about absorption probabilities. We use Mathematica to solve it. Starting from the initial state (H, 2), the absorption probabilities come out as about 0.011 for (H, w) and 0.989 for (O, w), so the probability of getting wet for the first time on the way home from the office is about 99%. The expected number of trips before getting wet is about 39.14, i.e., the expected number of days is about 19.57, roughly 20.

Problem 13. A zoologist, Dr. Gurkensaft, claims to have trained Basil the Rat so that it can avoid being shocked and find food even in highly confusing situations. Another scientist, Dr. Hasenpfeffer, does not agree. She says that Basil is stupid and cannot tell the difference between food and an electrical shocker until it gets very close to either of them. The two decide to see who is right by performing the following experiment. Basil is put in the compartment 3 of a maze. (Maze figure omitted; it has compartments 1 through 5, a Food compartment and a Shock compartment, with the exits as in the transition list in the solution.) Dr. Gurkensaft's hypothesis is that, once in a compartment with k exits, Basil will prefer the exits that lead him closer to the food. Dr. Hasenpfeffer's claim is that every time there are k exits from a compartment, Basil chooses each one with probability 1/k. After repeating the experiment 100 times, Basil got shocked before getting to food 52 times, and he reached food before being shocked 48 times. Compute the theoretical probabilities of being shocked before getting to food under the assumption that Basil is stupid (all exits are equally likely). Compare those to the observed data. Whose side is the evidence on?

Solution. The task here is to calculate the absorption probability for a Markov chain which represents the maze. Using Mathematica, we can do it in the following way:
    In[30]:= Transitions = {{1, 2, 1/2}, {1, 3, 1/2},
               {2, 1, 1/3}, {2, "Food", 1/3}, {2, 4, 1/3},
               {3, 1, 1/3}, {3, 4, 1/3}, {3, "Shock", 1/3},
               {4, 3, 1/3}, {4, 2, 1/3}, {4, 5, 1/3},
               {5, 4, 1/2}, {5, "Food", 1/2}};
             Initial = {3, 1};
             Maze = BuildChain[Transitions, Initial];

    In[33]:= States[Maze]
    Out[33]= {1, 2, 3, 4, 5, Food, Shock}

    In[34]:= P = TransitionMatrix[Maze];
             Q = P[[1 ;; 5, 1 ;; 5]];
             R = P[[1 ;; 5, 6 ;; 7]];
             F = Inverse[IdentityMatrix[5] - Q];
             N[F.R][[3]]

             {0.416667, 0.583333}

Therefore, the probability of getting to food before being shocked is about 42%. This is somewhat lower than the observed 48%, and even though there may be some truth to Dr. Gurkensaft's claims, these two numbers are not very different.

Note: for those of you who know a bit of statistics, you can easily show that we cannot reject the null hypothesis that Basil is stupid (in the precise sense described in the problem), even at the 90% significance level. In fact, the one-sided p-value, using a binomial test, is a bit larger than 0.1. That means that a truly stupid rat would appear smarter than Basil 10% of the time by pure chance.
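For readers without Mathematica, the same absorption computation, together with the binomial test mentioned in the note, can be reproduced in plain Python. This is a sketch with my own helper names; exact fractions are used so the answer 5/12 appears exactly:

```python
from fractions import Fraction as Fr
from math import comb

# maze transitions under the "stupid rat" hypothesis: each exit has probability 1/k
transitions = {
    1: [(2, Fr(1, 2)), (3, Fr(1, 2))],
    2: [(1, Fr(1, 3)), ("Food", Fr(1, 3)), (4, Fr(1, 3))],
    3: [(1, Fr(1, 3)), (4, Fr(1, 3)), ("Shock", Fr(1, 3))],
    4: [(3, Fr(1, 3)), (2, Fr(1, 3)), (5, Fr(1, 3))],
    5: [(4, Fr(1, 2)), ("Food", Fr(1, 2))],
}

# absorption probabilities f_i = P(Food before Shock | start in i)
# solve (I - Q) f = r, where r is the one-step probability of hitting Food
cells = [1, 2, 3, 4, 5]
n = len(cells)
A = [[Fr(int(i == j)) for j in range(n)] for i in range(n)]   # will hold I - Q
b = [Fr(0)] * n
for i, c in enumerate(cells):
    for dest, p in transitions[c]:
        if dest == "Food":
            b[i] += p
        elif dest != "Shock":
            A[i][cells.index(dest)] -= p

def solve(A, b):
    """Gauss-Jordan elimination over exact fractions."""
    m = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(m):
        piv = next(r for r in range(c, m) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        for r in range(m):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[r][m] / M[r][r] for r in range(m)]

food = solve(A, b)
print(food[2])     # starting from compartment 3: 5/12, about 0.4167

# one-sided binomial p-value: P(X >= 48) for X ~ Bin(100, 5/12)
p = Fr(5, 12)
pval = sum(comb(100, k) * p**k * (1 - p)**(100 - k) for k in range(48, 101))
print(float(pval))  # a bit larger than 0.1, as claimed in the note
```

The exact answer 5/12 ≈ 0.4167 agrees with the Mathematica output above, and the exact binomial p-value confirms that the evidence against the "stupid rat" null hypothesis is weak.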