The strong law of large numbers for Markov chains. Consider a finite-state Markov chain in which all states belong to a single recurrent class which is aperiodic. For a fixed state s, let Y_k be the time of the kth visit to state s. Let also V_n be the number of visits to state s during the first n transitions.

(a) Show that Y_k/k converges with probability 1 to the mean recurrence time t_s^* of state s.

(b) Show that V_n/n converges with probability 1 to 1/t_s^*.

(c) Can you relate the limit of V_n/n to the steady-state probability of state s?

Solution. (a) Let us fix an initial state i, not necessarily the same as s. The random variables Y_{k+1} - Y_k, for k >= 1, correspond to the times between successive visits to state s. Because of the Markov property (the past is independent of the future, given the present), the process "starts fresh" at each revisit to state s and, therefore, the random variables Y_{k+1} - Y_k are independent and identically distributed, with mean equal to the mean recurrence time t_s^*. Using the strong law of large numbers, we obtain

    lim_{k -> infinity} Y_k / k = t_s^*,    with probability 1.

(b) Let us fix an element of the sample space (a trajectory of the Markov chain). Let y_k and v_n be the values of the random variables Y_k and V_n, respectively. Furthermore, let us assume that the sequence y_k/k converges to t_s^*; according to the result of part (a), the set of trajectories with this property has probability 1. Let us consider some n between the time of the kth visit to state s and the time just before the next visit to that state:

    y_k <= n < y_{k+1}.

For every n in this range, we have v_n = k, and also

    1/y_{k+1} < 1/n <= 1/y_k,

from which we obtain

    k/y_{k+1} < v_n/n <= k/y_k.

Note that

    lim_{k -> infinity} k/y_{k+1} = lim_{k -> infinity} k/(k+1) · lim_{k -> infinity} (k+1)/y_{k+1} = lim_{k -> infinity} k/y_k = 1/t_s^*.

If we now let n go to infinity, the corresponding values of k, chosen to satisfy y_k <= n < y_{k+1}, also go to infinity.
Therefore, the sequence v_n/n is sandwiched between two sequences, both of which converge to 1/t_s^*, which implies that v_n/n converges to 1/t_s^* as well. Since this happens for every trajectory in a set of trajectories that has probability 1, we conclude that V_n/n converges to 1/t_s^*, with probability 1.

(c) We have 1/t_s^* = π_s, as established in Problem 35. This implies the intuitive result that V_n/n converges to π_s, with probability 1.

Note: It is tempting to try to establish the convergence of V_n/n to π_s by combining the fact that V_n/n converges [part (b)] with the fact that E[V_n]/n converges to π_s (cf. the long-term expected frequency interpretation of steady-state probabilities in Section 7.3). However, this line of reasoning is not valid, because it is generally possible for a sequence Y_n of random variables to converge with probability 1 to a constant while the expected values converge to a different constant. An example is the following. Let X be uniformly distributed in the unit interval [0, 1], and let

    Y_n = 0, if X >= 1/n,
    Y_n = n, if X < 1/n.

As long as X is nonzero (which happens with probability 1), the sequence Y_n converges to zero. On the other hand,

    E[Y_n] = P(X < 1/n) E[Y_n | X < 1/n] = (1/n) · n = 1,

so the expected values converge to 1, not 0.
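The conclusions of parts (a)-(c) are easy to check numerically. The sketch below simulates a small two-state chain; the particular transition probabilities, and the choice of state 0 as the state s, are illustrative assumptions rather than part of the problem. For this chain the steady-state probability of state 0 is 4/7, so V_n/n should approach 4/7 ≈ 0.571 and Y_k/k should approach the mean recurrence time 7/4 = 1.75.

```python
import random

# A minimal sketch, assuming a two-state chain with transition
# probabilities P(0 -> 1) = 0.3 and P(1 -> 0) = 0.4 (illustrative
# choices, not from the problem).  Its steady-state probabilities are
# pi_0 = 0.4/0.7 = 4/7 and pi_1 = 0.3/0.7 = 3/7.
random.seed(0)

def step(state):
    """Perform one transition of the chain from the given state."""
    if state == 0:
        return 1 if random.random() < 0.3 else 0
    return 0 if random.random() < 0.4 else 1

s = 0                    # the fixed state of interest
n = 200_000              # number of transitions to simulate
state = s
visits = 0               # V_n: visits to s during the first n transitions
last_visit_time = 0      # Y_k for the latest visit index k = visits
for t in range(1, n + 1):
    state = step(state)
    if state == s:
        visits += 1
        last_visit_time = t

print(visits / n)                  # V_n / n, close to pi_0 = 4/7 ~ 0.571
print(last_visit_time / visits)    # Y_k / k, close to t_0^* = 7/4 = 1.75
```

Both printed ratios settle near their theoretical limits, consistent with the sandwich argument above.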

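The counterexample in the note can also be checked by simulation. The sketch below (the values of n and the number of trials are arbitrary choices) fixes a large n and estimates E[Y_n] by Monte Carlo: although Y_n is zero except on an event of probability 1/n, the estimate stays near 1.

```python
import random

# Monte Carlo check of the counterexample: X is uniform on [0, 1] and
# Y_n = n if X < 1/n, else 0.  For any fixed X > 0 we have Y_n = 0 for
# all n > 1/X, so Y_n -> 0 with probability 1; nevertheless
# E[Y_n] = n * P(X < 1/n) = n * (1/n) = 1 for every n.
random.seed(1)

n = 1000
trials = 100_000
total = 0.0
for _ in range(trials):
    x = random.random()
    total += n if x < 1 / n else 0

print(total / trials)   # Monte Carlo estimate of E[Y_1000], close to 1
```

The estimate hovers near 1 even though a typical sampled trajectory of Y_n is identically zero, illustrating why almost-sure convergence does not by itself pin down the limit of the expectations.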
Lecture 11: Difference in Population Proportions

1. Independent Samples Scenario

- Two samples are independent samples when the measurements in one sample are not related to the measurements in the other sample.
- Examples:
  1. Random samples taken separately from two populations, with the same response variable recorded for each individual.
  2. One random sample is taken and a variable is recorded
  3. Participants randomly assigned
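With two independent samples, the natural estimate of the difference in population proportions p1 - p2 is the difference of the two sample proportions. A minimal sketch of this computation (the function name and the counts are hypothetical, for illustration only):

```python
def diff_in_proportions(successes1, n1, successes2, n2):
    """Estimate p1 - p2 by the difference of sample proportions
    computed from two independent samples (hypothetical helper)."""
    return successes1 / n1 - successes2 / n2

# Hypothetical counts: 45 successes out of 100 in sample 1,
# 30 successes out of 120 in sample 2.
print(diff_in_proportions(45, 100, 30, 120))   # 0.45 - 0.25 = 0.2
```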