Advanced Probability and Statistical Inference I (BIOS 760)
This 63-page set of class notes was uploaded by Nat McClure on Sunday, October 25, 2015. The notes are for BIOS 760 at the University of North Carolina at Chapel Hill, taught by Michael Kosorok in Fall.
CHAPTER 5: MAXIMUM LIKELIHOOD ESTIMATION

Introduction to Efficient Estimation

• Goal: the MLE is an asymptotically efficient estimator under some regularity conditions.

• Basic setting: suppose $X_1, \ldots, X_n$ are i.i.d. from $P_{\theta_0}$ in the model $\mathcal{P} = \{P_\theta : \theta \in \Theta\}$.
(A0) $\theta \neq \theta'$ implies $P_\theta \neq P_{\theta'}$ (identifiability).
(A1) $P_\theta$ has a density function $p_\theta$ with respect to a dominating $\sigma$-finite measure $\mu$.
(A2) The set $\{x : p_\theta(x) > 0\}$ does not depend on $\theta$.

• MLE definition:
$$L_n(\theta) = \prod_{i=1}^n p_\theta(X_i), \qquad l_n(\theta) = \sum_{i=1}^n \log p_\theta(X_i).$$
$L_n(\theta)$ and $l_n(\theta)$ are called the likelihood function and the log-likelihood function of $\theta$, respectively. An estimator $\hat\theta_n$ of $\theta_0$ is the maximum likelihood estimator (MLE) of $\theta_0$ if it maximizes the likelihood function $L_n(\theta)$.

Ad Hoc Arguments

$$\sqrt{n}(\hat\theta_n - \theta_0) \rightarrow_d N(0, I(\theta_0)^{-1}).$$
Consistency: $\hat\theta_n \rightarrow \theta_0$ (no asymptotic bias). Efficiency: the asymptotic variance attains the efficiency bound $I(\theta_0)^{-1}$.

• Consistency

Definition 5.1 Let $P$ be a probability measure and let $Q$ be another measure on $(\Omega, \mathcal{A})$, with densities $p$ and $q$ with respect to a $\sigma$-finite measure $\mu$ ($\mu = P + Q$ always works). Then the Kullback-Leibler information $K(P, Q)$ is
$$K(P, Q) = E_P\left[\log \frac{p(X)}{q(X)}\right].$$

Proposition 5.1 $K(P, Q)$ is well defined and $K(P, Q) \ge 0$; $K(P, Q) = 0$ if and only if $P = Q$.

Proof By Jensen's inequality,
$$K(P, Q) = E_P\left[-\log \frac{q(X)}{p(X)}\right] \ge -\log E_P\left[\frac{q(X)}{p(X)}\right] = -\log Q(\{x : p(x) > 0\}) \ge 0,$$
since $Q(\{p > 0\}) \le Q(\Omega) = 1$. Equality holds if and only if $p(X) = q(X)$ almost surely with respect to $P$ and $Q(\{p > 0\}) = 1$, that is, if and only if $P = Q$. □

• Why is the MLE consistent? $\hat\theta_n$ maximizes $l_n(\theta)$, so with $l_\theta(x) = \log p_\theta(x)$,
$$\frac{1}{n}\sum_{i=1}^n l_{\hat\theta_n}(X_i) \ge \frac{1}{n}\sum_{i=1}^n l_{\theta_0}(X_i).$$
Suppose $\hat\theta_n \rightarrow \theta^*$. Then we would expect both sides to converge, giving
$$E_{\theta_0}[l_{\theta^*}(X)] \ge E_{\theta_0}[l_{\theta_0}(X)],$$
which implies $K(P_{\theta_0}, P_{\theta^*}) \le 0$. From Proposition 5.1, $P_{\theta_0} = P_{\theta^*}$; from (A0), $\theta^* = \theta_0$. That is, $\hat\theta_n$ converges to $\theta_0$.

• Why is the MLE efficient? Suppose $\hat\theta_n \rightarrow \theta_0$ and $\hat\theta_n$ solves the likelihood (score) equations
$$\sum_{i=1}^n \dot{l}_{\hat\theta_n}(X_i) = 0.$$
A Taylor expansion at $\theta_0$ gives
$$0 = \sum_{i=1}^n \dot{l}_{\theta_0}(X_i) + \sum_{i=1}^n \ddot{l}_{\tilde\theta}(X_i)(\hat\theta_n - \theta_0),$$
where $\tilde\theta$ is between $\theta_0$ and $\hat\theta_n$; hence
$$\sqrt{n}(\hat\theta_n - \theta_0) = \left[-\frac{1}{n}\sum_{i=1}^n \ddot{l}_{\tilde\theta}(X_i)\right]^{-1} \frac{1}{\sqrt{n}}\sum_{i=1}^n \dot{l}_{\theta_0}(X_i) \rightarrow_d N(0, I(\theta_0)^{-1}).$$
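As a concrete illustration of these two heuristics (illustrative, not part of the notes): for the Exponential($\theta$) model $p_\theta(x) = \theta e^{-\theta x}$, the MLE is $\hat\theta_n = 1/\bar X_n$ and $I(\theta) = 1/\theta^2$, so $\sqrt{n}(\hat\theta_n - \theta_0)$ should be approximately $N(0, \theta_0^2)$. A minimal Python simulation sketch:

```python
import random
import statistics

def mle_exponential_experiment(theta0=2.0, n=500, reps=2000, seed=12345):
    """Simulate sqrt(n) * (theta_hat - theta0) for the Exponential(theta0)
    model, whose MLE is theta_hat = 1 / Xbar and whose Fisher information
    is I(theta) = 1 / theta^2, giving efficiency bound theta0^2."""
    rng = random.Random(seed)
    scaled_errors = []
    for _ in range(reps):
        xbar = sum(rng.expovariate(theta0) for _ in range(n)) / n
        theta_hat = 1.0 / xbar   # solves the score equation n/theta - sum_i X_i = 0
        scaled_errors.append((n ** 0.5) * (theta_hat - theta0))
    return statistics.mean(scaled_errors), statistics.variance(scaled_errors)

# Theory predicts mean near 0 and variance near theta0^2 = 4.
mean_err, var_err = mle_exponential_experiment()
```

The sample variance of the scaled errors should settle near the efficiency bound $\theta_0^2 = 4$ as $n$ grows.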
Consistency Results

Theorem 5.1 (Consistency with a dominating function) Suppose that
(a) $\Theta$ is compact;
(b) $\log p_\theta(x)$ is continuous in $\theta$ for all $x$;
(c) there exists a function $F(x)$ with $E_{\theta_0}[F(X)] < \infty$ such that $|\log p_\theta(x)| \le F(x)$ for all $x$ and $\theta$.
Then $\hat\theta_n \rightarrow_{a.s.} \theta_0$.

Proof Fix a sample point $\omega$ outside a null set. Since $\Theta$ is compact, by choosing a subsequence we may assume $\hat\theta_n \to \theta^*$. If $P_n[l_{\hat\theta_n}(X)] \to E_{\theta_0}[l_{\theta^*}(X)]$, then since
$$\frac{1}{n}\sum_{i=1}^n l_{\hat\theta_n}(X_i) \ge \frac{1}{n}\sum_{i=1}^n l_{\theta_0}(X_i) \rightarrow_{a.s.} E_{\theta_0}[l_{\theta_0}(X)],$$
we conclude $E_{\theta_0}[l_{\theta^*}(X)] \ge E_{\theta_0}[l_{\theta_0}(X)]$, so $\theta^* = \theta_0$ by Proposition 5.1 and identifiability, and we are done. It remains to show $P_n[l_{\hat\theta_n}(X)] - E_{\theta_0}[l_{\hat\theta_n}(X)] \to 0$; for this it suffices to prove the uniform convergence
$$\sup_{\theta \in \Theta}\left|P_n[l_\theta(X)] - E_{\theta_0}[l_\theta(X)]\right| \rightarrow_{a.s.} 0.$$
Define $\psi(x; \theta, \rho) = \sup_{|\theta' - \theta| \le \rho} |l_{\theta'}(x) - l_\theta(x)|$. Since $\theta \mapsto l_\theta(x)$ is continuous, $\psi(x; \theta, \rho)$ is measurable, and by the dominated convergence theorem $E_{\theta_0}[\psi(X; \theta, \rho)]$ decreases to $0$ as $\rho \downarrow 0$. Hence for $\varepsilon > 0$ and any $\theta \in \Theta$ there exists $\rho_\theta$ such that $E_{\theta_0}[\psi(X; \theta, \rho_\theta)] < \varepsilon$. The union of the balls $\{\theta' : |\theta' - \theta| < \rho_\theta\}$ covers $\Theta$; by compactness there exist finitely many $\theta_1, \ldots, \theta_m$ such that $\Theta \subset \bigcup_{j=1}^m \{\theta : |\theta - \theta_j| < \rho_{\theta_j}\}$. Then
$$\sup_{\theta \in \Theta}\left|P_n[l_\theta(X)] - E_{\theta_0}[l_\theta(X)]\right| \le \max_{1 \le j \le m}\left\{\left|P_n[l_{\theta_j}(X)] - E_{\theta_0}[l_{\theta_j}(X)]\right| + P_n[\psi(X; \theta_j, \rho_{\theta_j})] + E_{\theta_0}[\psi(X; \theta_j, \rho_{\theta_j})]\right\}.$$
Since $P_n[l_{\theta_j}] \to_{a.s.} E_{\theta_0}[l_{\theta_j}]$ and $P_n[\psi(\cdot; \theta_j, \rho_{\theta_j})] \to_{a.s.} E_{\theta_0}[\psi(X; \theta_j, \rho_{\theta_j})] < \varepsilon$,
$$\limsup_n \sup_{\theta \in \Theta}\left(P_n[l_\theta(X)] - E_{\theta_0}[l_\theta(X)]\right) \le 2\varepsilon \quad\text{a.s.},$$
and similarly $\liminf_n \inf_\theta (P_n[l_\theta] - E_{\theta_0}[l_\theta]) \ge -2\varepsilon$ a.s. Since $\varepsilon$ is arbitrary,
$$\sup_{\theta \in \Theta}\left|P_n[l_\theta(X)] - E_{\theta_0}[l_\theta(X)]\right| \rightarrow_{a.s.} 0. \;\square$$

A second consistency argument (Wald's argument, giving consistency in probability): since $E_{\theta_0}[l_{\theta_0}(X)] > E_{\theta_0}[l_\theta(X)]$ for any $\theta \neq \theta_0$, there exists a ball $U_\theta$ containing $\theta$ such that
$$E_{\theta_0}[l_{\theta_0}(X)] > E_{\theta_0}\left[\sup_{\theta' \in U_\theta} l_{\theta'}(X)\right].$$
Otherwise there would exist a sequence of balls $U_{k}$ shrinking to $\theta$ with $E_{\theta_0}[l_{\theta_0}(X)] \le E_{\theta_0}[\sup_{\theta' \in U_k} l_{\theta'}(X)]$; since $\sup_{\theta' \in U_k} l_{\theta'}(x)$ decreases in $k$ and $\limsup_k \sup_{\theta' \in U_k} l_{\theta'}(x) \le l_\theta(x)$ by continuity,
$$\limsup_k E_{\theta_0}\left[\sup_{\theta' \in U_k} l_{\theta'}(X)\right] \le E_{\theta_0}\left[\limsup_k \sup_{\theta' \in U_k} l_{\theta'}(X)\right] \le E_{\theta_0}[l_\theta(X)],$$
which would give $E_{\theta_0}[l_{\theta_0}(X)] \le E_{\theta_0}[l_\theta(X)]$, a contradiction.

For any $\varepsilon$, the balls $U_\theta$ cover the compact set $\{\theta \in \Theta : |\theta - \theta_0| \ge \varepsilon\}$, so there is a finite subcover $U_1, \ldots, U_m$. Then
$$P(|\hat\theta_n - \theta_0| > \varepsilon) \le P\left(\sup_{|\theta - \theta_0| > \varepsilon} P_n[l_\theta(X)] \ge P_n[l_{\theta_0}(X)]\right) \le P\left(\max_{1 \le j \le m} P_n\left[\sup_{\theta \in U_j} l_\theta(X)\right] \ge P_n[l_{\theta_0}(X)]\right)$$
$$\le \sum_{j=1}^m P\left(P_n\left[\sup_{\theta \in U_j} l_\theta(X)\right] \ge P_n[l_{\theta_0}(X)]\right).$$
Since $P_n[\sup_{\theta \in U_j} l_\theta(X)] \rightarrow_{a.s.} E_{\theta_0}[\sup_{\theta \in U_j} l_\theta(X)] < E_{\theta_0}[l_{\theta_0}(X)]$, the right-hand side converges to zero; hence $\hat\theta_n \rightarrow_p \theta_0$. □

Asymptotic Efficiency Result

Theorem 5.3 Suppose that the model $\mathcal{P} = \{P_\theta : \theta \in \Theta\}$, $\Theta \subset \mathbb{R}^k$, is Hellinger differentiable at an inner point $\theta_0$ of $\Theta$. Furthermore, suppose that there exists a measurable function $F$ with $E_{\theta_0}[F^2] < \infty$ such that for every $\theta_1$ and $\theta_2$ in a neighborhood of $\theta_0$,
$$|\log p_{\theta_1}(x) - \log p_{\theta_2}(x)| \le F(x)\,|\theta_1 - \theta_2|.$$
If the Fisher information matrix $I(\theta_0)$ is nonsingular and $\hat\theta_n$ is consistent, then
$$\sqrt{n}(\hat\theta_n - \theta_0) = \frac{1}{\sqrt{n}}\sum_{i=1}^n I(\theta_0)^{-1}\dot{l}_{\theta_0}(X_i) + o_{P_{\theta_0}}(1);$$
in particular, $\sqrt{n}(\hat\theta_n - \theta_0) \rightarrow_d N(0, I(\theta_0)^{-1})$.

Proof (outline) For any $h_n \to h$, Hellinger differentiability gives
$$\sqrt{n}\left(\sqrt{\frac{p_{\theta_0 + h_n/\sqrt{n}}}{p_{\theta_0}}} - 1\right) \to \frac{1}{2}h^T\dot{l}_{\theta_0} \quad\text{in } L_2(P_{\theta_0}),$$
from which (Step I in the proof of Theorem 4.1) we obtain the expansion
$$n\,P_n\left[\log\frac{p_{\theta_0 + h_n/\sqrt{n}}}{p_{\theta_0}}\right] = h^T\sqrt{n}\,(P_n - P)[\dot{l}_{\theta_0}] - \frac{1}{2}h^T I(\theta_0) h + o_{P_{\theta_0}}(1).$$
Apply this expansion twice, with $h_n = \sqrt{n}(\hat\theta_n - \theta_0)$ and with $h_n = I(\theta_0)^{-1}\sqrt{n}(P_n - P)[\dot{l}_{\theta_0}]$. Since $\hat\theta_n$ maximizes the likelihood, the left-hand side for the first choice is at least that for the second; comparing the two expansions and completing the square gives
$$-\frac{1}{2}\left|I(\theta_0)^{1/2}\left(\sqrt{n}(\hat\theta_n - \theta_0) - I(\theta_0)^{-1}\sqrt{n}(P_n - P)[\dot{l}_{\theta_0}]\right)\right|^2 + o_{P_{\theta_0}}(1) \ge 0,$$
hence $\sqrt{n}(\hat\theta_n - \theta_0) = I(\theta_0)^{-1}\sqrt{n}(P_n - P)[\dot{l}_{\theta_0}] + o_{P_{\theta_0}}(1)$. □

A second proof when $\hat\theta_n$ solves the score equation $0 = \sum_{i=1}^n \dot{l}_{\hat\theta_n}(X_i)$: a Taylor expansion around $\theta_0$ gives
$$0 = \frac{1}{\sqrt{n}}\sum_{i=1}^n \dot{l}_{\theta_0}(X_i) + \left[\frac{1}{n}\sum_{i=1}^n \ddot{l}_{\theta_0}(X_i) + o_p(1)\right]\sqrt{n}(\hat\theta_n - \theta_0),$$
so that
$$\sqrt{n}(\hat\theta_n - \theta_0) = \left[-\frac{1}{n}\sum_{i=1}^n \ddot{l}_{\theta_0}(X_i) + o_p(1)\right]^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^n \dot{l}_{\theta_0}(X_i) \rightarrow_d N(0, I(\theta_0)^{-1}). \;\square$$

Computation of the MLE

• Solve the likelihood equation
$$\sum_{i=1}^n \dot{l}_\theta(X_i) = 0.$$
Newton-Raphson iteration: at the $k$-th iteration,
$$\theta^{(k+1)} = \theta^{(k)} - \left[\sum_{i=1}^n \ddot{l}_{\theta^{(k)}}(X_i)\right]^{-1}\sum_{i=1}^n \dot{l}_{\theta^{(k)}}(X_i).$$
Note that $-n^{-1}\sum_{i=1}^n \ddot{l}_{\theta^{(k)}}(X_i) \approx I(\theta^{(k)})$, which suggests the Fisher scoring algorithm:
$$\theta^{(k+1)} = \theta^{(k)} + \left[n\,I(\theta^{(k)})\right]^{-1}\sum_{i=1}^n \dot{l}_{\theta^{(k)}}(X_i).$$

• Optimize the likelihood function directly with an optimum-search algorithm: grid search, quasi-Newton methods, gradient descent, MCMC, simulated annealing.

EM Algorithm for Missing Data

When part of the data is missing, or some mismeasured data is observed, a commonly used algorithm is the expectation-maximization (EM) algorithm.
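Before turning to missing data, the Newton-Raphson and Fisher scoring updates above can be made concrete with a sketch (illustrative, not from the notes) for the $t_5$ location model, where the observed information is random, so the two algorithms genuinely differ; the per-observation information $I = (\nu+1)/(\nu+3) = 3/4$ is the standard value for the $t_\nu$ location family with $\nu = 5$:

```python
import math
import random
import statistics

def t5_score(theta, xs):
    # Score of the t_5 location log-likelihood: sum_i 6*(x_i - theta)/(5 + (x_i - theta)^2)
    return sum(6.0 * (x - theta) / (5.0 + (x - theta) ** 2) for x in xs)

def t5_neg_hessian(theta, xs):
    # Observed information -l''(theta): sum_i 6*(5 - u^2)/(5 + u^2)^2 with u = x_i - theta
    return sum(6.0 * (5.0 - (x - theta) ** 2) / (5.0 + (x - theta) ** 2) ** 2 for x in xs)

def newton_raphson(xs, theta, steps=50):
    # theta^{k+1} = theta^k + [ -sum l'' ]^{-1} sum l'   (observed information)
    for _ in range(steps):
        theta += t5_score(theta, xs) / t5_neg_hessian(theta, xs)
    return theta

def fisher_scoring(xs, theta, steps=200):
    # Replace -sum l'' by the expected information n * I(theta) = 0.75 * n
    info = 0.75 * len(xs)
    for _ in range(steps):
        theta += t5_score(theta, xs) / info
    return theta

rng = random.Random(7)
data = []
for _ in range(400):
    z = rng.gauss(0.0, 1.0)
    v = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(5))   # chi-square(5)
    data.append(1.0 + z / math.sqrt(v / 5.0))             # t_5 draw shifted to location 1.0

start = statistics.median(data)   # a consistent starting value
nr_est = newton_raphson(data, start)
fs_est = fisher_scoring(data, start)
```

Both iterations should land on the same root of the score equation; Fisher scoring trades the random curvature for a fixed, always-positive one.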
• Framework of the EM algorithm: $Y = (Y_{mis}, Y_{obs})$; $R$ is a vector of 0/1 indicating which components are missing or not missing. The density function for the observed data $(Y_{obs}, R)$ is
$$\int f(Y; \theta)\,P(R\,|\,Y)\,dY_{mis}.$$

• Missing mechanism: the missing-at-random (MAR) assumption is
$$P(R\,|\,Y) = P(R\,|\,Y_{obs}),$$
and $P(R\,|\,Y)$ does not depend on $\theta$; i.e., the missing-data probability depends only on the observed data and is not informative about $\theta$. Under MAR, the observed-data likelihood is proportional to
$$f(Y_{obs}; \theta) = \int f((Y_{obs}, y_{mis}); \theta)\,dy_{mis},$$
and we maximize $f(Y_{obs}; \theta)$, or $\log f(Y_{obs}; \theta)$.

• Details of the EM algorithm: we start from any initial value $\theta^{(1)}$ and use the following iterations; the $k$-th iteration consists of an E-step and an M-step.
E-step: we evaluate the conditional expectation
$$E\left[\log f(Y; \theta)\,\middle|\,Y_{obs}, \theta^{(k)}\right] = \int \log f((Y_{obs}, y_{mis}); \theta)\,\frac{f((Y_{obs}, y_{mis}); \theta^{(k)})}{\int f((Y_{obs}, y'_{mis}); \theta^{(k)})\,dy'_{mis}}\,dy_{mis}.$$
M-step: we maximize this conditional expectation over $\theta$ and take the maximizer as $\theta^{(k+1)}$.

• Rationale: why the EM works.

Theorem 5.5 At each iteration of the EM algorithm,
$$\log f(Y_{obs}; \theta^{(k+1)}) \ge \log f(Y_{obs}; \theta^{(k)}),$$
and the equality holds if and only if $\theta^{(k+1)} = \theta^{(k)}$.

Proof Since $f(Y; \theta) = f(Y_{mis}\,|\,Y_{obs}; \theta)\,f(Y_{obs}; \theta)$,
$$\log f(Y_{obs}; \theta^{(k+1)}) - \log f(Y_{obs}; \theta^{(k)})$$
$$= \left\{E\left[\log f(Y; \theta^{(k+1)})\,\middle|\,Y_{obs}, \theta^{(k)}\right] - E\left[\log f(Y; \theta^{(k)})\,\middle|\,Y_{obs}, \theta^{(k)}\right]\right\}$$
$$\quad - \left\{E\left[\log f(Y_{mis}\,|\,Y_{obs}; \theta^{(k+1)})\,\middle|\,Y_{obs}, \theta^{(k)}\right] - E\left[\log f(Y_{mis}\,|\,Y_{obs}; \theta^{(k)})\,\middle|\,Y_{obs}, \theta^{(k)}\right]\right\}.$$
The first brace is nonnegative by the definition of the M-step; the second brace is nonpositive, being minus a Kullback-Leibler information (Proposition 5.1). Hence $\log f(Y_{obs}; \theta^{(k+1)}) \ge \log f(Y_{obs}; \theta^{(k)})$, and the equality holds if and only if $\log f(Y_{mis}\,|\,Y_{obs}; \theta^{(k+1)}) = \log f(Y_{mis}\,|\,Y_{obs}; \theta^{(k)})$ almost surely, i.e., $\theta^{(k+1)} = \theta^{(k)}$. □

• Incorporating Newton-Raphson in the EM: in the M-step, instead of a full maximization, we may perform one Newton-Raphson update using the conditional expectations
$$E\left[\frac{\partial}{\partial\theta}\log f(Y; \theta)\,\middle|\,Y_{obs}, \theta^{(k)}\right] \quad\text{and}\quad E\left[\frac{\partial^2}{\partial\theta^2}\log f(Y; \theta)\,\middle|\,Y_{obs}, \theta^{(k)}\right].$$

• Example: suppose a random vector $Y$ has a multinomial distribution with $n = 197$ and
$$p = \left(\frac{1}{2} + \frac{\theta}{4},\; \frac{1-\theta}{4},\; \frac{1-\theta}{4},\; \frac{\theta}{4}\right).$$
Then the probability for $Y = (y_1, y_2, y_3, y_4)$ is given by
$$\frac{n!}{y_1!\,y_2!\,y_3!\,y_4!}\left(\frac{1}{2} + \frac{\theta}{4}\right)^{y_1}\left(\frac{1-\theta}{4}\right)^{y_2}\left(\frac{1-\theta}{4}\right)^{y_3}\left(\frac{\theta}{4}\right)^{y_4}.$$
Suppose we observe $Y = (125, 18, 20, 34)$. If we start with $\theta^{(1)} = 0.5$, after convergence of the Newton-Raphson iteration we obtain $\hat\theta_n = 0.6268215$.
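The same estimate can be reproduced with the EM algorithm by treating the first cell as the sum of two latent counts with probabilities $1/2$ and $\theta/4$ (the standard EM formulation of this example); a short illustrative sketch:

```python
def em_linkage(y=(125, 18, 20, 34), theta=0.5, iters=200):
    """EM for the multinomial with cell probabilities
    (1/2 + theta/4, (1-theta)/4, (1-theta)/4, theta/4),
    splitting cell 1 into latent parts with probabilities 1/2 and theta/4."""
    y1, y2, y3, y4 = y
    for _ in range(iters):
        # E-step: expected latent count falling in the theta/4 part of cell 1
        x2 = y1 * (theta / 4.0) / (0.5 + theta / 4.0)
        # M-step: complete-data MLE, a binomial proportion over the completed counts
        theta = (x2 + y4) / (x2 + y2 + y3 + y4)
    return theta

theta_hat = em_linkage()   # converges to about 0.6268215
```

Starting from $\theta^{(1)} = 0.5$, the iterates move through $0.6082, 0.6243, \ldots$ and settle at the Newton-Raphson answer.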
• EM algorithm for the same example: the full data $X = (X_1, X_2, X_3, X_4, X_5)$ has a multinomial distribution with $n$ and
$$p = \left(\frac{1}{2},\; \frac{\theta}{4},\; \frac{1-\theta}{4},\; \frac{1-\theta}{4},\; \frac{\theta}{4}\right),$$
and the observed data are $Y = (X_1 + X_2, X_3, X_4, X_5)$.

• Conclusions:
- The EM converges, and the result agrees with what is obtained from the Newton-Raphson iteration.
- The EM convergence is linear, in the sense that $|\theta^{(k+1)} - \hat\theta_n|/|\theta^{(k)} - \hat\theta_n|$ becomes a constant at convergence; the convergence of the Newton-Raphson iteration is quadratic, in the sense that $|\theta^{(k+1)} - \hat\theta_n|/|\theta^{(k)} - \hat\theta_n|^2$ becomes a constant at convergence.
- Each EM step is much less complex than a Newton-Raphson iteration; this is the advantage of using the EM algorithm.

• More examples: the exponential mixture model. Suppose $Y \sim P_\theta$, where $P_\theta$ has density
$$p_\theta(y) = \left\{p\,\lambda e^{-\lambda y} + (1-p)\,\mu e^{-\mu y}\right\}1(y > 0)$$
and $\theta = (p, \lambda, \mu) \in (0,1) \times (0,\infty) \times (0,\infty)$. Consider estimation of $\theta$ based on $Y_1, \ldots, Y_n$ i.i.d. $p_\theta(y)$. Solving the likelihood equation with the Newton-Raphson iteration is computationally involved.

Information Calculation in the EM

• Notation: $\dot{l}$ is the score function for $\theta$ in the full data; $\dot{l}_{mis|obs}$ is the score for $\theta$ in the conditional distribution of $Y_{mis}$ given $Y_{obs}$; $\dot{l}_{obs}$ is the score for $\theta$ in the distribution of $Y_{obs}$. Then
$$\dot{l} = \dot{l}_{mis|obs} + \dot{l}_{obs}, \qquad \mathrm{Var}(\dot{l}) = \mathrm{Var}\left(E[\dot{l}\,|\,Y_{obs}]\right) + E\left[\mathrm{Var}(\dot{l}\,|\,Y_{obs})\right].$$

• Information in the EM algorithm: we obtain the Louis formula
$$I_c(\theta) = I_{obs}(\theta) + I_{mis}(\theta);$$
that is, the complete information is the sum of the observed information and the missing information. One can even show that when the EM converges, the linear convergence rate $|\theta^{(k+1)} - \hat\theta_n|/|\theta^{(k)} - \hat\theta_n|$ approximates $1 - I_{obs}(\hat\theta_n)\,I_c(\hat\theta_n)^{-1}$, the fraction of missing information.

Nonparametric Maximum Likelihood Estimation

• First example: let $X_1, \ldots, X_n$ be i.i.d. random variables with common distribution $F$, where $F$ is any unknown distribution function. The likelihood function for $F$ is given by
$$L_n(F) = \prod_{i=1}^n f(X_i),$$
where $f$ is the density function of $F$ with respect to some dominating measure.
However, the maximum of $L_n(F)$ does not exist.

• Second example: suppose $X_1, \ldots, X_n$ are i.i.d. $F$ and $Y_1, \ldots, Y_n$ are i.i.d. $G$. We observe i.i.d. pairs $(Z_1, \Delta_1), \ldots, (Z_n, \Delta_n)$, where $Z_i = \min(X_i, Y_i)$ and $\Delta_i = I(X_i \le Y_i)$. We can think of $X_i$ as a survival time and $Y_i$ as a censoring time. It is easy to calculate that the joint distribution of $(Z_i, \Delta_i)$, $i = 1, \ldots, n$, is equal to
$$L_n(F, G) = \prod_{i=1}^n \left[f(Z_i)(1 - G(Z_i))\right]^{\Delta_i}\left[(1 - F(Z_i))\,g(Z_i)\right]^{1-\Delta_i}.$$
$L_n(F, G)$ does not have a maximum, so we consider an alternative function in which the densities are replaced by point masses:
$$\tilde{L}_n(F, G) = \prod_{i=1}^n \left[F\{Z_i\}(1 - G(Z_i))\right]^{\Delta_i}\left[(1 - F(Z_i))\,G\{Z_i\}\right]^{1-\Delta_i},$$
where $F\{z\}$ and $G\{z\}$ denote the point masses at $z$.

• Third example: suppose $T$ is a survival time and $Z$ is a covariate. Assume $T\,|\,Z$ has the conditional hazard function
$$\lambda(t\,|\,Z) = \lambda(t)\,e^{\theta^T Z}$$
(the proportional hazards model). Then the likelihood function from $n$ i.i.d. observations $(T_i, Z_i)$, $i = 1, \ldots, n$, is given by
$$\prod_{i=1}^n \lambda(T_i)\,e^{\theta^T Z_i}\exp\left(-\Lambda(T_i)\,e^{\theta^T Z_i}\right)f(Z_i).$$
Note that $f$ is not informative about $\theta$ and $\Lambda$, so we can discard it from the likelihood function. Again, we replace $\lambda(T_i)$ by the point mass $\Lambda\{T_i\}$.

• Fourth example: we consider $X_1, \ldots, X_n$ i.i.d. $F$ and $Y_1, \ldots, Y_n$ i.i.d. $G$. We only observe $(Y_i, \Delta_i)$, where $\Delta_i = I(X_i \le Y_i)$, for $i = 1, \ldots, n$. This data is one type of interval-censored data ("current status" data). The likelihood for the observations is
$$\prod_{i=1}^n F(Y_i)^{\Delta_i}(1 - F(Y_i))^{1-\Delta_i} g(Y_i).$$
To derive the NPMLE for $F$ and $G$, we instead maximize
$$\prod_{i=1}^n p_i^{\Delta_i}(1 - p_i)^{1-\Delta_i} q_i,$$
subject to the constraints that $\sum_i q_i = 1$, $0 \le p_i \le 1$, and $p_i$ increases with $Y_i$ (here $p_i = F(Y_i)$ and $q_i = G\{Y_i\}$).

Summary of the NPMLE
- The NPMLE is a generalization of maximum likelihood estimation from parametric models to semiparametric or nonparametric models.
- We replace the functional parameter by an empirical function with jumps only at observed data and maximize a modified likelihood function.
- Both the computation of the NPMLE and its asymptotic properties can be difficult, and they vary across specific problems.

Alternative Efficient Estimation

• One-step efficient estimation: start from a strongly consistent estimator for the parameter $\theta$, denoted $\bar\theta_n$, assuming that
$$|\bar\theta_n - \theta_0| = O_p(n^{-1/2}).$$
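An aside before the one-step details: the censored-data NPMLE of $F$ in the second example above is the familiar Kaplan-Meier product-limit estimator. A minimal sketch, assuming distinct (untied) observation times:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of S(t) = 1 - F(t) from
    right-censored pairs (Z_i, Delta_i); assumes distinct observation times.
    Returns (time, S(time)) just after each observed failure."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    for i in order:
        if events[i] == 1:                 # observed failure: Delta_i = 1
            surv *= 1.0 - 1.0 / at_risk    # multiply by (1 - 1/n_at_risk)
            curve.append((times[i], surv))
        at_risk -= 1                       # leave the risk set either way
    return curve

# Failure at t=1, censoring at t=2, failure at t=3:
curve = kaplan_meier([1.0, 2.0, 3.0], [1, 0, 1])
```

For this toy dataset the survival estimate drops to $2/3$ at $t = 1$ and to $0$ at $t = 3$; the censored point at $t = 2$ only shrinks the risk set.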
The one-step procedure is a single Newton-Raphson iteration for solving the likelihood score equation:
$$\hat\theta_n = \bar\theta_n - \left[\sum_{i=1}^n \ddot{l}_{\bar\theta_n}(X_i)\right]^{-1}\sum_{i=1}^n \dot{l}_{\bar\theta_n}(X_i),$$
where $\dot{l}_\theta$ is the score function and $\ddot{l}_\theta$ is the derivative of $\dot{l}_\theta$.

• Result about the one-step estimation:

Theorem 5.6 Let $l_\theta(X)$ be the log-likelihood function of $\theta$. Assume that there exists a neighborhood of $\theta_0$ such that in this neighborhood $|l^{(3)}_\theta(X)| \le F(X)$ with $E_{\theta_0}[F(X)] < \infty$. Then
$$\sqrt{n}(\hat\theta_n - \theta_0) \rightarrow_d N(0, I(\theta_0)^{-1}),$$
where $I(\theta_0)$ is the Fisher information.

Proof Since $\bar\theta_n \rightarrow_{a.s.} \theta_0$, we perform a Taylor expansion on the right-hand side of the one-step equation and obtain
$$\hat\theta_n = \bar\theta_n - \left[\sum_{i=1}^n \ddot{l}_{\bar\theta_n}(X_i)\right]^{-1}\left[\sum_{i=1}^n \dot{l}_{\theta_0}(X_i) + \sum_{i=1}^n \ddot{l}_{\tilde\theta}(X_i)(\bar\theta_n - \theta_0)\right],$$
where $\tilde\theta$ is between $\bar\theta_n$ and $\theta_0$. On the other hand, by the condition $|l^{(3)}_\theta(X)| \le F(X)$ with $E_{\theta_0}[F(X)] < \infty$,
$$\frac{1}{n}\sum_{i=1}^n \ddot{l}_{\bar\theta_n}(X_i) \rightarrow_{a.s.} E_{\theta_0}[\ddot{l}_{\theta_0}(X)] = -I(\theta_0), \qquad \frac{1}{n}\sum_{i=1}^n \ddot{l}_{\tilde\theta}(X_i) \rightarrow_{a.s.} -I(\theta_0).$$
Therefore
$$\sqrt{n}(\hat\theta_n - \theta_0) = \left(1 - \left[\frac{1}{n}\sum_{i=1}^n \ddot{l}_{\bar\theta_n}(X_i)\right]^{-1}\frac{1}{n}\sum_{i=1}^n \ddot{l}_{\tilde\theta}(X_i)\right)\sqrt{n}(\bar\theta_n - \theta_0) + I(\theta_0)^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^n \dot{l}_{\theta_0}(X_i) + o_p(1).$$
The coefficient of $\sqrt{n}(\bar\theta_n - \theta_0) = O_p(1)$ converges to zero, so
$$\sqrt{n}(\hat\theta_n - \theta_0) = I(\theta_0)^{-1}\frac{1}{\sqrt{n}}\sum_{i=1}^n \dot{l}_{\theta_0}(X_i) + o_p(1) \rightarrow_d N(0, I(\theta_0)^{-1}). \;\square$$

• A slightly different one-step estimation replaces the observed information by the Fisher information:
$$\hat\theta_n = \bar\theta_n + \left[n\,I(\bar\theta_n)\right]^{-1}\sum_{i=1}^n \dot{l}_{\bar\theta_n}(X_i).$$

• Other efficient estimation: Bayesian estimation methods (e.g., the posterior mode), minimax estimators, etc.

Conclusions
- The maximum likelihood approach provides a natural and simple way of deriving an efficient estimator.
- Other estimation approaches are possible for efficient estimation, such as one-step estimation, Bayesian estimation, etc.
- How do these generalize from parametric models to semiparametric or nonparametric models?

READING MATERIALS: Ferguson, Sections 16-20; Lehmann and Casella, Sections 6.2-6.7.

BIOS 760, Spring 2008 (08/22/2008)

Density function of $Y = X^2$, where $X \sim N(\mu, 1)$: a proof of the formula from the Lecture Notes, page 10, below Corollary 1.1.

Proposition Let $X \sim N(\mu, 1)$, $Y = X^2$, and $\delta = \mu^2$. Then
$$f_Y(y) = \sum_{k=0}^\infty p_k(\delta/2)\,g_{(2k+1)/2,\,1/2}(y),$$
where $p_k(\delta/2) = e^{-\delta/2}(\delta/2)^k/k!$ and $g_{(2k+1)/2,\,1/2}(y)$ is the density of Gamma$((2k+1)/2, 1/2)$.

Proof The CDF of $Y$ can be computed as follows. Since $Y$ assumes nonnegative values only, let $y \ge 0$:
$$F_Y(y) = \Pr(Y \le y) = \Pr(X^2 \le y) = \Pr(-\sqrt{y} \le X \le \sqrt{y}) = F_X(\sqrt{y}) - F_X(-\sqrt{y}),$$
using that $X$ is a continuous random variable. Differentiating with respect to $y$, we get the density function
$$f_Y(y) = \frac{1}{2\sqrt{y}}f_X(\sqrt{y}) + \frac{1}{2\sqrt{y}}f_X(-\sqrt{y}) = \frac{1}{2\sqrt{2\pi y}}\left[e^{-(\sqrt{y}-\mu)^2/2} + e^{-(\sqrt{y}+\mu)^2/2}\right]$$
$$= \frac{1}{\sqrt{2\pi y}}\,e^{-(y+\delta)/2}\sum_{k=0}^\infty \frac{(\delta y)^k}{(2k)!},$$
by using the power series (Taylor) expansion of the exponential function, since $\frac{1}{2}(e^{\mu\sqrt{y}} + e^{-\mu\sqrt{y}}) = \sum_{k\ge 0}(\mu^2 y)^k/(2k)!$. Using $(2k)! = 2^k k! \cdot 1\cdot 3\cdots(2k-1)$ and $\Gamma(k + 1/2) = \frac{1\cdot 3\cdots(2k-1)}{2^k}\sqrt{\pi}$ (by repeated use of $\Gamma(z+1) = z\,\Gamma(z)$ with $\Gamma(1/2) = \sqrt{\pi}$), each term rearranges to
$$e^{-\delta/2}\frac{(\delta/2)^k}{k!}\cdot\frac{y^{(2k+1)/2 - 1}e^{-y/2}}{2^{(2k+1)/2}\,\Gamma((2k+1)/2)} = p_k(\delta/2)\,g_{(2k+1)/2,\,1/2}(y).$$
Summing over $k$ gives the stated formula. □

CHAPTER 3: LARGE SAMPLE THEORY

Introduction

• Why large sample theory: studying small-sample properties is usually difficult and complicated; large sample theory studies the limiting behavior of a sequence of random variables, say $X_n$. Example: $\bar{X}_n \to \mu$, and the limiting distribution of $\sqrt{n}(\bar{X}_n - \mu)$.

Modes of Convergence

• Convergence almost surely.
Definition 3.1 $X_n$ is said to converge almost surely to $X$, denoted $X_n \rightarrow_{a.s.} X$, if there exists a set $A \subset \Omega$ such that $P(A^c) = 0$ and, for each $\omega \in A$, $X_n(\omega) \to X(\omega)$.

• Equivalent condition:
$$\{\omega : X_n(\omega) \to X(\omega)\}^c = \bigcup_{\varepsilon > 0}\bigcap_{n \ge 1}\left\{\omega : \sup_{m \ge n}|X_m(\omega) - X(\omega)| > \varepsilon\right\},$$
so $X_n \rightarrow_{a.s.} X$ if and only if $P(\sup_{m \ge n}|X_m - X| > \varepsilon) \to 0$ for every $\varepsilon > 0$.

• Convergence in probability.
Definition 3.2 $X_n$ is said to converge in probability to $X$, denoted $X_n \rightarrow_p X$, if for every $\varepsilon > 0$, $P(|X_n - X| > \varepsilon) \to 0$.

• Convergence in moments (means).
Definition 3.3 $X_n$ is said to converge in $r$th mean to $X$, denoted $X_n \rightarrow_r X$, if $E|X_n - X|^r \to 0$ as $n \to \infty$, for $X_n, X \in L_r(P)$, where $X \in L_r(P)$ means $\int |X|^r\,dP < \infty$.

• Convergence in distribution.
Definition 3.4 $X_n$ is said to converge in distribution to $X$, denoted $X_n \rightarrow_d X$ (or $F_n \rightarrow_d F$, or $\mathcal{L}(X_n) \to \mathcal{L}(X)$, with $\mathcal{L}$ referring to the law or distribution), if the distribution functions $F_n$ and $F$ of $X_n$ and $X$ satisfy $F_n(x) \to F(x)$ as $n \to \infty$ for each continuity point $x$ of $F$.

• Uniform integrability.
Definition 3.5 A sequence of random variables $\{X_n\}$ is uniformly integrable if
$$\lim_{\lambda\to\infty}\limsup_n E\left[|X_n|\,I(|X_n| \ge \lambda)\right] = 0.$$
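Definition 3.2 can be watched in action with a toy Monte Carlo experiment (illustrative, not from the notes): for $\bar{X}_n$ the mean of $n$ Uniform$(0,1)$ draws, $P(|\bar{X}_n - 1/2| > \varepsilon)$ shrinks as $n$ grows:

```python
import random

def prob_large_deviation(n, eps=0.1, reps=2000, seed=1):
    """Monte Carlo estimate of P(|Xbar_n - 1/2| > eps) for Uniform(0,1) draws."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xbar = sum(rng.random() for _ in range(n)) / n
        if abs(xbar - 0.5) > eps:
            hits += 1
    return hits / reps

p10 = prob_large_deviation(10)    # moderately large for small n
p500 = prob_large_deviation(500)  # essentially zero for large n
```

This is exactly the convergence-in-probability statement $\bar{X}_n \rightarrow_p 1/2$ from the weak law of large numbers.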
• A note:
- Convergence almost surely and convergence in probability are the same as defined in measure theory.
- Two new definitions: convergence in $r$th mean and convergence in distribution.

• Convergence in distribution:
- is very different from the other modes;
- example: the sequence $X, Y, X, Y, \ldots$, where $X$ and $Y$ are independent $N(0,1)$: the sequence converges in distribution to $N(0,1)$, but the other modes of convergence do not hold;
- convergence in distribution is important for asymptotic statistical inference.

• Relationships among the different modes:

Theorem 3.1
(A) If $X_n \rightarrow_{a.s.} X$, then $X_n \rightarrow_p X$.
(B) If $X_n \rightarrow_p X$, then $X_{n_k} \rightarrow_{a.s.} X$ for some subsequence $\{X_{n_k}\}$.
(C) If $X_n \rightarrow_r X$, then $X_n \rightarrow_p X$.
(D) If $X_n \rightarrow_p X$ and $\{|X_n|^r\}$ is uniformly integrable, then $X_n \rightarrow_r X$.
(E) If $X_n \rightarrow_p X$ and $\limsup_n E|X_n|^r \le E|X|^r$, then $X_n \rightarrow_r X$.
(F) If $X_n \rightarrow_r X$, then $X_n \rightarrow_{r'} X$ for any $0 < r' \le r$.
(G) If $X_n \rightarrow_p X$, then $X_n \rightarrow_d X$.
(H) $X_n \rightarrow_p X$ if and only if every subsequence of $\{X_n\}$ contains a further subsequence converging almost surely to $X$.

Proof (A) and (B) follow from the results in measure theory.

Prove (C). Markov's inequality: for any increasing nonnegative function $g$ and random variable $Y$, $P(|Y| > \varepsilon) \le E[g(|Y|)]/g(\varepsilon)$. Hence
$$P(|X_n - X| > \varepsilon) \le \frac{E|X_n - X|^r}{\varepsilon^r} \to 0.$$

Prove (D). It is sufficient to show that for any subsequence of $\{X_n\}$ there exists a further subsequence $\{X_{n_k}\}$ such that $E|X_{n_k} - X|^r \to 0$. For any subsequence, by (B) there exists a further subsequence $\{X_{n_k}\}$ with $X_{n_k} \rightarrow_{a.s.} X$. For any $\varepsilon > 0$, by uniform integrability choose $\lambda$ such that
$$\limsup_{n_k} E\left[|X_{n_k}|^r I(|X_{n_k}|^r \ge \lambda)\right] < \varepsilon.$$
By Fatou's lemma, $E[|X|^r I(|X|^r \ge \lambda)] \le \limsup_{n_k} E[|X_{n_k}|^r I(|X_{n_k}|^r \ge \lambda)] < \varepsilon$. Then
$$E|X_{n_k} - X|^r \le E\left[|X_{n_k} - X|^r I(|X_{n_k}|^r < 2^r\lambda,\,|X|^r < 2^r\lambda)\right] + E\left[|X_{n_k} - X|^r I(|X_{n_k}|^r \ge 2^r\lambda \text{ or } |X|^r \ge 2^r\lambda)\right].$$
The first term converges to zero by the dominated convergence theorem. Using the inequality $|x - y|^r \le 2^r\max(|x|^r, |y|^r) \le 2^r(|x|^r + |y|^r)$ for $x, y \ge 0$, when $n_k$ is large the second term is bounded by
$$2\cdot 2^r\left\{E\left[|X_{n_k}|^r I(|X_{n_k}|^r \ge \lambda)\right] + E\left[|X|^r I(|X|^r \ge \lambda)\right]\right\} \le 2^{r+2}\varepsilon.$$
Since $\varepsilon$ is arbitrary, $E|X_{n_k} - X|^r \to 0$.

Prove (E). Again it is sufficient to show that any subsequence has a further subsequence $\{X_{n_k}\}$ with $E|X_{n_k} - X|^r \to 0$; choose $\{X_{n_k}\}$ with $X_{n_k} \rightarrow_{a.s.} X$. Define
$$Y_{n_k} = 2^r\left(|X_{n_k}|^r + |X|^r\right) - |X_{n_k} - X|^r \ge 0.$$
By Fatou's lemma,
$$\int \liminf_{n_k} Y_{n_k}\,dP \le \liminf_{n_k}\int Y_{n_k}\,dP,$$
which is equivalent to
$$2^{r+1}E|X|^r \le \liminf_{n_k}\left\{2^r E|X_{n_k}|^r + 2^r E|X|^r - E|X_{n_k} - X|^r\right\} \le 2^r\limsup_{n_k}E|X_{n_k}|^r + 2^r E|X|^r - \limsup_{n_k}E|X_{n_k} - X|^r.$$
Since $\limsup_{n_k} E|X_{n_k}|^r \le E|X|^r$, this gives $\limsup_{n_k} E|X_{n_k} - X|^r \le 0$.

Prove (F). The Hölder inequality:
$$\int |f(x)g(x)|\,d\mu \le \left(\int |f(x)|^p\,d\mu\right)^{1/p}\left(\int |g(x)|^q\,d\mu\right)^{1/q}.$$
Choose $\mu = P$, $f = |X_n - X|^{r'}$, $g \equiv 1$, and $p = r/r'$, $q = r/(r - r')$:
$$E|X_n - X|^{r'} \le \left(E|X_n - X|^r\right)^{r'/r} \to 0.$$

Prove (G). Suppose $X_n \rightarrow_p X$. If $P(X = x) = 0$, then for any $\varepsilon > 0$ and $\delta > 0$,
$$P\left(|I(X_n \le x) - I(X \le x)| > \varepsilon\right)$$
$$= P\left(|I(X_n \le x) - I(X \le x)| > \varepsilon,\,|X - x| > \delta\right) + P\left(|I(X_n \le x) - I(X \le x)| > \varepsilon,\,|X - x| \le \delta\right)$$
$$\le P(X_n \le x, X > x + \delta) + P(X_n > x, X < x - \delta) + P(|X - x| \le \delta)$$
$$\le P(|X_n - X| > \delta) + P(|X - x| \le \delta).$$
The first term converges to zero since $X_n \rightarrow_p X$; the second term can be made arbitrarily small by choosing $\delta$ small, since $\lim_{\delta\to 0} P(|X - x| \le \delta) = P(X = x) = 0$. Hence $I(X_n \le x) \rightarrow_p I(X \le x)$, and since the indicators are bounded,
$$F_n(x) = E[I(X_n \le x)] \to E[I(X \le x)] = F(x)$$
at each continuity point $x$ of $F$.

Prove (H). One direction follows from (B). To prove the other direction, use contradiction: suppose there exists $\varepsilon > 0$ such that $P(|X_n - X| > \varepsilon)$ does not converge to zero. Then we can find a subsequence $\{X_{n'}\}$ such that $P(|X_{n'} - X| > \varepsilon) > \delta$ for some $\delta > 0$. However, by the assumption, there exists a further subsequence $\{X_{n''}\}$ such that $X_{n''} \rightarrow_{a.s.} X$; then $X_{n''} \rightarrow_p X$ from (A), a contradiction. □
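The alternating example $X, Y, X, Y, \ldots$ above can be checked by simulation (illustrative, not from the notes): every term is exactly $N(0,1)$, so convergence in distribution is trivial, yet $X - Y \sim N(0, 2)$ is nondegenerate, so the sequence cannot converge in probability:

```python
import math
import random

rng = random.Random(42)
pairs = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(20000)]

# Each term of X, Y, X, Y, ... is N(0,1), so X_n ->_d N(0,1) automatically.
# But consecutive terms differ by X - Y ~ N(0, 2), so
# P(|X_{n+1} - X_n| > 1) stays near the fixed value 2*(1 - Phi(1/sqrt(2))).
frac_apart = sum(1 for x, y in pairs if abs(x - y) > 1.0) / len(pairs)
theory = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(0.5)))  # Phi(z) = (1 + erf(z/sqrt 2))/2 at z = 1/sqrt 2
```

The empirical fraction stays near $2(1 - \Phi(1/\sqrt{2})) \approx 0.48$ no matter how far out in the sequence we look, which is exactly the failure of convergence in probability.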