The strong law of large numbers for Markov chains.

Introduction to Probability, 2nd Edition | ISBN: 9781886529236 | Authors: Dimitri P. Bertsekas, John N. Tsitsiklis

Solution for Problem 35, Chapter 7

Introduction to Probability, | 2nd Edition

Problem 35

The strong law of large numbers for Markov chains. Consider a finite-state Markov chain in which all states belong to a single recurrent class which is aperiodic. For a fixed state s, let Y_k be the time of the kth visit to state s. Let also V_n be the number of visits to state s during the first n transitions.

(a) Show that Y_k/k converges with probability 1 to the mean recurrence time t_s* of state s.

(b) Show that V_n/n converges with probability 1 to 1/t_s*.

(c) Can you relate the limit of V_n/n to the steady-state probability of state s?

Solution. (a) Let us fix an initial state i, not necessarily the same as s. The random variables Y_{k+1} - Y_k, for k ≥ 1, correspond to the times between successive visits to state s. Because of the Markov property (the past is independent of the future, given the present), the process "starts fresh" at each revisit to state s, and therefore the random variables Y_{k+1} - Y_k are independent and identically distributed, with mean equal to the mean recurrence time t_s*. (The first visit time Y_1 may have a different distribution, since the chain starts at i, but Y_1/k → 0, so it does not affect the limit.) Using the strong law of large numbers, we obtain

    Y_k / k → t_s*,   with probability 1.

(b) Let us fix an element of the sample space (a trajectory of the Markov chain). Let y_k and v_n be the values of the random variables Y_k and V_n, respectively. Furthermore, let us assume that the sequence y_k/k converges to t_s*; according to the result of part (a), the set of trajectories with this property has probability 1. Let us consider some n between the time of the kth visit to state s and the time just before the next visit to that state:

    y_k ≤ n < y_{k+1}.

For every n in this range, we have v_n = k, and also

    1/y_{k+1} < 1/n ≤ 1/y_k,

from which we obtain

    k/y_{k+1} < v_n/n ≤ k/y_k.

Note that

    lim_{k→∞} k/y_{k+1} = lim_{k→∞} (k+1)/y_{k+1} · lim_{k→∞} k/(k+1) = lim_{k→∞} k/y_k = 1/t_s*.

If we now let n go to infinity, the corresponding values of k, chosen to satisfy y_k ≤ n < y_{k+1}, also go to infinity.
Therefore, the sequence v_n/n is squeezed between two sequences, both of which converge to 1/t_s*, which implies that v_n/n converges to 1/t_s* as well. Since this happens for every trajectory in a set of trajectories that has probability 1, we conclude that V_n/n converges to 1/t_s*, with probability 1.

(c) We have 1/t_s* = π_s, as established in 35. This implies the intuitive result that V_n/n converges to the steady-state probability π_s, with probability 1.

Note: It is tempting to try to establish the convergence of V_n/n to π_s by combining the fact that V_n/n converges [part (b)] with the fact that E[V_n]/n converges to π_s (cf. the long-term expected frequency interpretation of steady-state probabilities in Section 7.3). However, this line of reasoning is not valid, because it is generally possible for a sequence Y_n of random variables to converge with probability 1 to a constant, while the expected values converge to a different constant. An example is the following. Let X be uniformly distributed in the unit interval [0, 1], and let

    Y_n = 0,    if X ≥ 1/n,
    Y_n = n/2,  if X < 1/n.

As long as X is nonzero (which happens with probability 1), the sequence Y_n converges to zero: once n > 1/X, we have Y_n = 0. On the other hand,

    E[Y_n] = P(X < 1/n) · E[Y_n | X < 1/n] = (1/n) · (n/2) = 1/2.
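The convergence in parts (b) and (c) is easy to see numerically. The sketch below simulates a small hypothetical three-state chain (the transition matrix, seed, and run length are arbitrary illustrative choices, not from the textbook) whose states form a single aperiodic recurrent class, and compares the empirical visit frequency V_n/n, the inverse of the empirical mean recurrence time, and the steady-state probability π_s computed from the left eigenvector of the transition matrix:

```python
import numpy as np

# Hypothetical 3-state chain; every state communicates with every
# other and self-transitions make the class aperiodic.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

rng = np.random.default_rng(0)
s = 0                      # the fixed state of interest
n = 200_000                # number of transitions to simulate
state = 1                  # initial state i, deliberately different from s

visit_times = []           # Y_k: times of successive visits to state s
for t in range(1, n + 1):
    state = rng.choice(3, p=P[state])
    if state == s:
        visit_times.append(t)

V_n = len(visit_times)                   # number of visits during n transitions
t_s = np.mean(np.diff(visit_times))      # empirical mean recurrence time of s

# Steady-state distribution: left eigenvector of P for eigenvalue 1,
# normalized to sum to 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# V_n/n, 1/t_s, and pi[s] should all nearly coincide for large n.
print(V_n / n, 1 / t_s, pi[s])
```

For this matrix the three printed numbers agree to roughly two or three decimal places, illustrating V_n/n → 1/t_s* = π_s.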
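The counterexample in the note can also be checked by simulation: the Monte Carlo estimate of E[Y_n] stays near 1/2 for every n, even though each individual realization of Y_n is eventually zero. A minimal sketch (the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=1_000_000)   # one million draws of X ~ U[0, 1]

def Y(n, x):
    # Y_n = n/2 if X < 1/n, and 0 otherwise
    return np.where(x < 1.0 / n, n / 2.0, 0.0)

# E[Y_n] = P(X < 1/n) * (n/2) = 1/2 for every n, and the Monte Carlo
# averages stay near 1/2 accordingly ...
means = {n: Y(n, x).mean() for n in (10, 100, 1000)}
print(means)

# ... yet for any fixed realization X = c > 0, Y_n equals 0 as soon as
# n > 1/c, so the sequence Y_n converges to 0 with probability 1.
print(Y(1000, np.array([0.3]))[0])
```

This is exactly the gap the note warns about: almost-sure convergence of Y_n to 0 does not force E[Y_n] to converge to 0.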


Textbook: Introduction to Probability
Edition: 2
Authors: Dimitri P. Bertsekas, John N. Tsitsiklis
ISBN: 9781886529236

