In Exercise 8.8, we considered a random sample of size 3 from an exponential distribution with density function given by f(y) = (1/θ)e^(−y/θ) for 0 < y, and 0 elsewhere, and determined that θ̂1 = Y1, θ̂2 = (Y1 + Y2)/2, θ̂3 = (Y1 + 2Y2)/3, and θ̂5 = Ȳ are all unbiased estimators for θ. Find the efficiency of θ̂1 relative to θ̂5, of θ̂2 relative to θ̂5, and of θ̂3 relative to θ̂5.
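The claimed efficiencies can be checked numerically. The sketch below (not part of the textbook; θ = 2 and the replication count are arbitrary choices) simulates many samples of size 3 and estimates each relative efficiency as the ratio of sample variances, eff(θ̂i relative to θ̂5) = V(θ̂5)/V(θ̂i).

```python
import numpy as np

# Monte Carlo sketch: compare variances of the four unbiased estimators of
# theta from samples of size 3 drawn from an exponential distribution.
rng = np.random.default_rng(0)
theta = 2.0                       # arbitrary true value for the check
reps = 200_000
y = rng.exponential(theta, size=(reps, 3))

t1 = y[:, 0]                      # theta_hat_1 = Y1
t2 = (y[:, 0] + y[:, 1]) / 2      # theta_hat_2 = (Y1 + Y2)/2
t3 = (y[:, 0] + 2 * y[:, 1]) / 3  # theta_hat_3 = (Y1 + 2Y2)/3
t5 = y.mean(axis=1)               # theta_hat_5 = sample mean

# relative efficiency of t_i to t5 is V(t5)/V(t_i)
eff1 = t5.var() / t1.var()        # theory: 1/3
eff2 = t5.var() / t2.var()        # theory: 2/3
eff3 = t5.var() / t3.var()        # theory: 3/5
```

The simulated ratios should land near 1/3, 2/3, and 3/5, matching the exact variances V(θ̂1) = θ², V(θ̂2) = θ²/2, V(θ̂3) = 5θ²/9, and V(θ̂5) = θ²/3.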
Statistics / Mathematical Statistics with Applications, 7th edition / Chapter 9 / Problem 9.31
Textbook Solutions for Mathematical Statistics with Applications
Question
If Y1, Y2,..., Yn denote a random sample from a gamma distribution with parameters α and β, show that Ȳ converges in probability to some constant and find the constant.
Solution
The first step in solving Chapter 9, Problem 31 is to refer to the textbook question: If Y1, Y2,..., Yn denote a random sample from a gamma distribution with parameters α and β, show that Ȳ converges in probability to some constant and find the constant.
The textbook chapter Properties of Point Estimators and Methods of Estimation covers the key concepts needed to solve this.
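The conclusion of the exercise, that Ȳ converges in probability to E(Y1) = αβ by the weak law of large numbers, can be illustrated with a short simulation. This is a sketch, not the book's solution; the values α = 3 and β = 2 are arbitrary choices.

```python
import numpy as np

# Law-of-large-numbers sketch: the sample mean of Gamma(alpha, beta)
# observations should settle near E(Y1) = alpha * beta as n grows.
rng = np.random.default_rng(1)
alpha, beta = 3.0, 2.0            # arbitrary parameter choices

means = {n: rng.gamma(shape=alpha, scale=beta, size=n).mean()
         for n in (100, 10_000, 1_000_000)}
# the entries of `means` should approach alpha * beta = 6 as n grows
```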
Chapter 9 textbook questions
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Let Y1, Y2,..., Yn denote a random sample from a population with mean μ and variance σ². Consider the following three estimators for μ: μ̂1 = (1/2)(Y1 + Y2), μ̂2 = (1/4)Y1 + (Y2 + ··· + Yn−1)/(2(n − 2)) + (1/4)Yn, μ̂3 = Ȳ. a Show that each of the three estimators is unbiased. b Find the efficiency of μ̂3 relative to μ̂2 and μ̂1, respectively.
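Both parts can be spot-checked by simulation. The sketch below is not from the textbook; the population (normal with mean 5, standard deviation 2) and n = 8 are arbitrary assumptions made only so the check is concrete.

```python
import numpy as np

# Monte Carlo sketch: check unbiasedness of the three estimators of mu and
# estimate the efficiency of mu_hat_3 = Ybar relative to mu_hat_1.
rng = np.random.default_rng(2)
mu, sigma, n, reps = 5.0, 2.0, 8, 100_000   # arbitrary choices
y = rng.normal(mu, sigma, size=(reps, n))

m1 = 0.5 * (y[:, 0] + y[:, 1])
m2 = 0.25 * y[:, 0] + y[:, 1:-1].sum(axis=1) / (2 * (n - 2)) + 0.25 * y[:, -1]
m3 = y.mean(axis=1)

# efficiency of m3 relative to m1 is V(m1)/V(m3); theory gives n/2 = 4 here
eff_31 = m1.var() / m3.var()
```

All three simulated means should sit near μ = 5 (unbiasedness), and eff_31 should be near n/2 since V(μ̂1) = σ²/2 while V(μ̂3) = σ²/n.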
Let Y1, Y2,..., Yn denote a random sample from the uniform distribution on the interval (θ, θ + 1). Let θ̂1 = Ȳ − 1/2 and θ̂2 = Y(n) − n/(n + 1). a Show that both θ̂1 and θ̂2 are unbiased estimators of θ. b Find the efficiency of θ̂1 relative to θ̂2.
Let Y1, Y2,..., Yn denote a random sample of size n from a uniform distribution on the interval (0, θ). If Y(1) = min(Y1, Y2,..., Yn), the result of Exercise 8.18 is that θ̂1 = (n + 1)Y(1) is an unbiased estimator for θ. If Y(n) = max(Y1, Y2,..., Yn), the results of Example 9.1 imply that θ̂2 = [(n + 1)/n]Y(n) is another unbiased estimator for θ. Show that the efficiency of θ̂1 to θ̂2 is 1/n². Notice that this implies that θ̂2 is a markedly superior estimator.
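The 1/n² efficiency is striking enough to be worth a numerical sanity check. This sketch is not from the textbook; θ = 10 and n = 5 are arbitrary assumptions.

```python
import numpy as np

# Monte Carlo sketch: variance ratio of the two unbiased estimators of theta
# built from the minimum and the maximum of a Uniform(0, theta) sample.
rng = np.random.default_rng(3)
theta, n, reps = 10.0, 5, 200_000   # arbitrary choices
y = rng.uniform(0.0, theta, size=(reps, n))

t1 = (n + 1) * y.min(axis=1)          # (n + 1) * Y_(1)
t2 = (n + 1) / n * y.max(axis=1)      # ((n + 1)/n) * Y_(n)

eff = t2.var() / t1.var()             # theory: 1/n**2 = 0.04 for n = 5
```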
Suppose that Y1, Y2,..., Yn is a random sample from a normal distribution with mean μ and variance σ². Two unbiased estimators of σ² are σ̂1² = S² = [1/(n − 1)] Σ(i=1..n) (Yi − Ȳ)² and σ̂2² = (1/2)(Y1 − Y2)². Find the efficiency of σ̂1² relative to σ̂2².
Suppose that Y1, Y2,..., Yn denote a random sample of size n from a Poisson distribution with mean λ. Consider λ̂1 = (Y1 + Y2)/2 and λ̂2 = Ȳ. Derive the efficiency of λ̂1 relative to λ̂2.
Suppose that Y1, Y2,..., Yn denote a random sample of size n from an exponential distribution with density function given by f(y) = (1/θ)e^(−y/θ) for 0 < y, and 0 elsewhere. In Exercise 8.19, we determined that θ̂1 = nY(1) is an unbiased estimator of θ with MSE(θ̂1) = θ². Consider the estimator θ̂2 = Ȳ and find the efficiency of θ̂1 relative to θ̂2.
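Since both estimators are unbiased, the efficiency is just the ratio of variances, which a simulation can estimate. The sketch below is not from the textbook; θ = 2 and n = 8 are arbitrary assumptions.

```python
import numpy as np

# Monte Carlo sketch: n*Y_(1) versus the sample mean for exponential data.
rng = np.random.default_rng(10)
theta, n, reps = 2.0, 8, 200_000    # arbitrary choices
y = rng.exponential(theta, size=(reps, n))

t1 = n * y.min(axis=1)              # n * Y_(1), variance theta^2
t2 = y.mean(axis=1)                 # Ybar, variance theta^2 / n

eff = t2.var() / t1.var()           # theory: 1/n = 0.125 for n = 8
```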
Let Y1, Y2,..., Yn denote a random sample from a probability density function f(y), which has unknown parameter θ. If θ̂ is an unbiased estimator of θ, then under very general conditions V(θ̂) ≥ I(θ), where I(θ) = {n E[−∂² ln f(Y)/∂θ²]}^(−1). (This is known as the Cramér–Rao inequality.) If V(θ̂) = I(θ), the estimator θ̂ is said to be efficient. a Suppose that f(y) is the normal density with mean μ and variance σ². Show that Ȳ is an efficient estimator of μ. b This inequality also holds for discrete probability functions p(y). Suppose that p(y) is the Poisson probability function with mean λ. Show that Ȳ is an efficient estimator of λ.
Applet Exercise … How many trials n have you simulated? What value of p̂n did you observe? Is the value close to .5, the true value of p? Is the graph a flat horizontal line? Why or why not? c Click the button 100 Trials a single time. What do you observe? Click the button 100 Trials repeatedly until the total number of trials is 1000. Is the graph that you obtained identical to the one given in Figure 9.1? In what sense is it similar to the graph in Figure 9.1? d Based on the sample of size 1000, what is the value of p̂1000? Is this value what you expected to observe? e Click the button Reset. Click the button 100 Trials ten times to generate another sequence of values for p̂. Comment.
Applet Exercise Refer to Exercises 9.9 and 9.10. How can the results of several sequences of Bernoulli trials be simultaneously plotted? Access the applet PointbyPoint. Scroll down until you can view all six buttons under the top graph. a Do not change the value of p from the preset value p = .5. Click the button One Trial a few times to verify that you are obtaining a result similar to those obtained in Exercise 9.9. Click the button 5 Trials until you have generated a total of 50 trials. What is the value of p̂50 that you obtained at the end of this first sequence of 50 trials? b Click the button New Sequence. The color of your initial graph changes from red to green. Click the button 5 Trials a few times. What do you observe? Is the graph the same as the one you observed in part (a)? In what sense is it similar? c Click the button New Sequence. Generate a new sequence of 50 trials. Repeat until you have generated five sequences. Are the paths generated by the five sequences identical? In what sense are they similar?
Applet Exercise Refer to Exercise 9.11. What happens if each sequence is longer? Scroll down to the portion of the screen labeled Longer Sequences of Trials. a Repeat the instructions in parts (a)–(c) of Exercise 9.11. b What do you expect to happen if p is not 0.5? Use the button in the lower right corner to change the value of p. Generate several sequences of trials. Comment.
Applet Exercise Refer to Exercises 9.9–9.12. Access the applet Point Estimation. a Choose a value for p. Click the button New Sequence repeatedly. What do you observe? b Scroll down to the portion of the applet labeled More Trials. Choose a value for p and click the button New Sequence repeatedly. You will obtain up to 50 sequences, each based on 1000 trials. How does the variability among the estimates change as a function of the sample size? How is this manifested in the display that you obtained?
Applet Exercise Refer to Exercise 9.13. Scroll down to the portion of the applet labeled Mean of Normal Data. Successive observed values of a standard normal random variable can be generated and used to compute the value of the sample mean Ȳn. These successive values are then plotted versus the respective sample size to obtain one sample path. a Do you expect the values of Ȳn to cluster around any particular value? What value? b If the results of 50 sample paths are plotted, how do you expect the variability of the estimates to change as a function of sample size? c Click the button New Sequence several times. Did you observe what you expected based on your answers to parts (a) and (b)?
Refer to Exercise 9.3. Show that both θ̂1 and θ̂2 are consistent estimators for θ.
Refer to Exercise 9.5. Is σ̂2² a consistent estimator of σ²?
Suppose that X1, X2,..., Xn and Y1, Y2,..., Yn are independent random samples from populations with means μ1 and μ2 and variances σ1² and σ2², respectively. Show that X̄ − Ȳ is a consistent estimator of μ1 − μ2.
In Exercise 9.17, suppose that the populations are normally distributed with σ1² = σ2² = σ². Show that [Σ(i=1..n) (Xi − X̄)² + Σ(i=1..n) (Yi − Ȳ)²]/(2n − 2) is a consistent estimator of σ².
Let Y1, Y2,..., Yn denote a random sample from the probability density function f(y) = θy^(θ−1) for 0 < y < 1, and 0 elsewhere, where θ > 0. Show that Ȳ is a consistent estimator of θ/(θ + 1).
Let Y1, Y2,..., Yn be a random sample of size n from a normal population with mean μ and variance σ². Assuming that n = 2k for some integer k, one possible estimator for σ² is given by σ̂² = [1/(2k)] Σ(i=1..k) (Y2i − Y2i−1)². a Show that σ̂² is an unbiased estimator for σ². b Show that σ̂² is a consistent estimator for σ².
Refer to Exercise 9.21. Suppose that Y1, Y2,..., Yn is a random sample of size n from a Poisson-distributed population with mean λ. Again, assume that n = 2k for some integer k. Consider λ̂ = [1/(2k)] Σ(i=1..k) (Y2i − Y2i−1)². a Show that λ̂ is an unbiased estimator for λ. b Show that λ̂ is a consistent estimator for λ.
Refer to Exercise 9.21. Suppose that Y1, Y2,..., Yn is a random sample of size n from a population for which the first four moments are finite. That is, m′1 = E(Y1) < ∞, m′2 = E(Y1²) < ∞, m′3 = E(Y1³) < ∞, and m′4 = E(Y1⁴) < ∞. (Note: This assumption is valid for the normal and Poisson distributions in Exercises 9.21 and 9.22, respectively.) Again, assume that n = 2k for some integer k. Consider σ̂² = [1/(2k)] Σ(i=1..k) (Y2i − Y2i−1)². a Show that σ̂² is an unbiased estimator for σ². b Show that σ̂² is a consistent estimator for σ². c Why did you need the assumption that m′4 = E(Y1⁴) < ∞?
Let Y1, Y2, Y3,..., Yn be independent standard normal random variables. a What is the distribution of Σ(i=1..n) Yi²? b Let Wn = (1/n) Σ(i=1..n) Yi². Does Wn converge in probability to some constant? If so, what is the value of the constant?
Suppose that Y1, Y2,..., Yn denote a random sample of size n from a normal distribution with mean μ and variance 1. Consider the first observation Y1 as an estimator for μ. a Show that Y1 is an unbiased estimator for μ. b Find P(|Y1 − μ| ≤ 1). c Look at the basic definition of consistency given in Definition 9.2. Based on the result of part (b), is Y1 a consistent estimator for μ?
It is sometimes relatively easy to establish consistency or lack of consistency by appealing directly to Definition 9.2, evaluating P(|θ̂n − θ| ≤ ε) directly, and then showing that lim(n→∞) P(|θ̂n − θ| ≤ ε) = 1. Let Y1, Y2,..., Yn denote a random sample of size n from a uniform distribution on the interval (0, θ). If Y(n) = max(Y1, Y2,..., Yn), we showed in Exercise 6.74 that the probability distribution function of Y(n) is given by F(n)(y) = 0 for y < 0; (y/θ)^n for 0 ≤ y ≤ θ; and 1 for y > θ. a For each n ≥ 1 and every ε > 0, it follows that P(|Y(n) − θ| ≤ ε) = P(θ − ε ≤ Y(n) ≤ θ + ε). If ε > θ, verify that P(θ − ε ≤ Y(n) ≤ θ + ε) = 1 and that, for every positive ε < θ, we obtain P(θ − ε ≤ Y(n) ≤ θ + ε) = 1 − [(θ − ε)/θ]^n. b Using the result from part (a), show that Y(n) is a consistent estimator for θ by showing that, for every ε > 0, lim(n→∞) P(|Y(n) − θ| ≤ ε) = 1.
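The closed-form probability in part (a) can be checked against a direct simulation. This sketch is not from the textbook; θ = 1, ε = 0.05, and n = 50 are arbitrary assumptions.

```python
import numpy as np

# Compare the exact probability P(|Y_(n) - theta| <= eps)
# = 1 - ((theta - eps)/theta)^n against a Monte Carlo estimate,
# and confirm it tends to 1 for large n.
rng = np.random.default_rng(4)
theta, eps, n, reps = 1.0, 0.05, 50, 100_000   # arbitrary choices

exact = 1 - ((theta - eps) / theta) ** n
ymax = rng.uniform(0.0, theta, size=(reps, n)).max(axis=1)
simulated = np.mean(np.abs(ymax - theta) <= eps)

exact_big_n = 1 - ((theta - eps) / theta) ** 2000   # essentially 1
```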
Use the method described in Exercise 9.26 to show that, if Y(1) = min(Y1, Y2,..., Yn) when Y1, Y2,..., Yn are independent uniform random variables on the interval (0, θ), then Y(1) is not a consistent estimator for θ. [Hint: Based on the methods of Section 6.7, Y(1) has the distribution function F(1)(y) = 0 for y < 0; 1 − (1 − y/θ)^n for 0 ≤ y ≤ θ; and 1 for y > θ.]
Let Y1, Y2,..., Yn denote a random sample of size n from a Pareto distribution (see Exercise 6.18). Then the methods of Section 6.7 imply that Y(1) = min(Y1, Y2,..., Yn) has the distribution function given by F(1)(y) = 0 for y ≤ β, and 1 − (β/y)^(αn) for y > β. Use the method described in Exercise 9.26 to show that Y(1) is a consistent estimator of β.
Let Y1, Y2,..., Yn denote a random sample of size n from a power family distribution (see Exercise 6.17). Then the methods of Section 6.7 imply that Y(n) = max(Y1, Y2,..., Yn) has the distribution function given by F(n)(y) = 0 for y < 0; (y/θ)^(αn) for 0 ≤ y ≤ θ; and 1 for y > θ. Use the method described in Exercise 9.26 to show that Y(n) is a consistent estimator of θ.
If Y1, Y2,..., Yn denote a random sample from a gamma distribution with parameters α and β, show that Ȳ converges in probability to some constant and find the constant.
Let Y1, Y2,..., Yn denote a random sample from the probability density function f(y) = 2/y² for y ≥ 2, and 0 elsewhere. Does the law of large numbers apply to Ȳ in this case? Why or why not?
An experimenter wishes to compare the numbers of bacteria of types A and B in samples of water. A total of n independent water samples are taken, and counts are made for each sample. Let Xi denote the number of type A bacteria and Yi denote the number of type B bacteria for sample i. Assume that the two bacteria types are sparsely distributed within a water sample so that X1, X2,..., Xn and Y1, Y2,..., Yn can be considered independent random samples from Poisson distributions with means λ1 and λ2, respectively. Suggest an estimator of λ1/(λ1 + λ2). What properties does your estimator have?
The Rayleigh density function is given by f(y) = (2y/θ)e^(−y²/θ) for y > 0, and 0 elsewhere. In Exercise 6.34(a), you established that Y² has an exponential distribution with mean θ. If Y1, Y2,..., Yn denote a random sample from a Rayleigh distribution, show that Wn = (1/n) Σ(i=1..n) Yi² is a consistent estimator for θ.
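The consistency claim can be illustrated numerically using the fact quoted from Exercise 6.34(a) that Y² is exponential with mean θ, so Rayleigh variates can be drawn as square roots of exponential variates. This is a sketch, not the book's solution; θ = 4 is an arbitrary assumption.

```python
import numpy as np

# Consistency sketch: W_n = (1/n) * sum(Y_i^2) should settle near theta,
# since each Y_i^2 is exponential with mean theta.
rng = np.random.default_rng(9)
theta = 4.0                          # arbitrary true value

w = {n: (np.sqrt(rng.exponential(theta, size=n)) ** 2).mean()
     for n in (100, 1_000_000)}
# w[1_000_000] should be very close to theta
```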
Let Y1, Y2,... be a sequence of random variables with E(Yi) = μ and V(Yi) = σi². Notice that the σi²'s are not all equal. a What is E(Ȳn)? b What is V(Ȳn)? c Under what condition (on the σi²'s) can Theorem 9.1 be applied to show that Ȳn is a consistent estimator for μ?
Suppose that Y has a binomial distribution based on n trials and success probability p. Then p̂n = Y/n is an unbiased estimator of p. Use Theorem 9.3 to prove that the distribution of (p̂n − p)/√(p̂n q̂n/n) converges to a standard normal distribution. [Hint: Write Y as we did in Section 7.5.]
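The asymptotic normality can be eyeballed by simulating the standardized statistic and checking that its mean and variance are near 0 and 1. This sketch is not from the textbook; p = 0.3 and n = 2500 are arbitrary assumptions (n is taken large so the approximation is already good).

```python
import numpy as np

# Sketch: simulate (p_hat - p) / sqrt(p_hat * q_hat / n) many times and
# check that its empirical mean and variance match a standard normal.
rng = np.random.default_rng(5)
p, n, reps = 0.3, 2500, 100_000     # arbitrary choices

phat = rng.binomial(n, p, size=reps) / n
z = (phat - p) / np.sqrt(phat * (1 - phat) / n)

z_mean, z_var = z.mean(), z.var()
```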
Let X1, X2,..., Xn denote n independent and identically distributed Bernoulli random variables such that P(Xi = 1) = p and P(Xi = 0) = 1 − p, for each i = 1, 2,..., n. Show that Σ(i=1..n) Xi is sufficient for p by using the factorization criterion given in Theorem 9.4.
Let Y1, Y2,..., Yn denote a random sample from a normal distribution with mean μ and variance σ². a If μ is unknown and σ² is known, show that Ȳ is sufficient for μ. b If μ is known and σ² is unknown, show that Σ(i=1..n) (Yi − μ)² is sufficient for σ². c If μ and σ² are both unknown, show that Σ(i=1..n) Yi and Σ(i=1..n) Yi² are jointly sufficient for μ and σ². [Thus, it follows that Ȳ and Σ(Yi − Ȳ)², or Ȳ and S², are also jointly sufficient for μ and σ².]
Let Y1, Y2,..., Yn denote a random sample from a Poisson distribution with parameter λ. Show by conditioning that Σ(i=1..n) Yi is sufficient for λ.
Let Y1, Y2,..., Yn denote a random sample from a Weibull distribution with known m and unknown α. (Refer to Exercise 6.26.) Show that Σ(i=1..n) Yi^m is sufficient for α.
If Y1, Y2,..., Yn denote a random sample from a geometric distribution with parameter p, show that Ȳ is sufficient for p.
Let Y1, Y2,..., Yn denote independent and identically distributed random variables from a power family distribution with parameters α and θ. Then, by the result in Exercise 6.17, if α, θ > 0, f(y | α, θ) = αy^(α−1)/θ^α for 0 ≤ y ≤ θ, and 0 elsewhere. If θ is known, show that ∏(i=1..n) Yi is sufficient for α.
Let Y1, Y2,..., Yn denote independent and identically distributed random variables from a Pareto distribution with parameters α and β. Then, by the result in Exercise 6.18, if α, β > 0, f(y | α, β) = αβ^α y^(−(α+1)) for y ≥ β, and 0 elsewhere. If β is known, show that ∏(i=1..n) Yi is sufficient for α.
Suppose that Y1, Y2,..., Yn is a random sample from a probability density function in the (one-parameter) exponential family so that f(y | θ) = a(θ)b(y)e^(−[c(θ)d(y)]) for a ≤ y ≤ b, and 0 elsewhere, where a and b do not depend on θ. Show that Σ(i=1..n) d(Yi) is sufficient for θ.
If Y1, Y2,..., Yn denote a random sample from an exponential distribution with mean β, show that f(y | β) is in the exponential family and that Ȳ is sufficient for β.
Refer to Exercise 9.43. If θ is known, show that the power family of distributions is in the exponential family. What is a sufficient statistic for α? Does this contradict your answer to Exercise 9.43?
Refer to Exercise 9.44. If β is known, show that the Pareto distribution is in the exponential family. What is a sufficient statistic for α? Argue that there is no contradiction between your answer to this exercise and the answer you found in Exercise 9.44.
Let Y1, Y2,..., Yn denote a random sample from the uniform distribution over the interval (0, θ). Show that Y(n) = max(Y1, Y2,..., Yn) is sufficient for θ.
Let Y1, Y2,..., Yn denote a random sample from the probability density function f(y | θ) = e^(−(y−θ)) for y ≥ θ, and 0 elsewhere. Show that Y(1) = min(Y1, Y2,..., Yn) is sufficient for θ.
Let Y1, Y2,..., Yn be a random sample from a population with density function f(y | θ) = 3y²/θ³ for 0 ≤ y ≤ θ, and 0 elsewhere. Show that Y(n) = max(Y1, Y2,..., Yn) is sufficient for θ.
Let Y1, Y2,..., Yn be a random sample from a population with density function f(y | θ) = 2θ²/y³ for θ < y < ∞, and 0 elsewhere. Show that Y(1) = min(Y1, Y2,..., Yn) is sufficient for θ.
Let Y1, Y2,..., Yn denote independent and identically distributed random variables from a power family distribution with parameters α and θ. Then, as in Exercise 9.43, if α, θ > 0, f(y | α, θ) = αy^(α−1)/θ^α for 0 ≤ y ≤ θ, and 0 elsewhere. Show that max(Y1, Y2,..., Yn) and ∏(i=1..n) Yi are jointly sufficient for α and θ.
Let Y1, Y2,..., Yn denote independent and identically distributed random variables from a Pareto distribution with parameters α and β. Then, as in Exercise 9.44, if α, β > 0, f(y | α, β) = αβ^α y^(−(α+1)) for y ≥ β, and 0 elsewhere. Show that ∏(i=1..n) Yi and min(Y1, Y2,..., Yn) are jointly sufficient for α and β.
Refer to Exercise 9.38(b). Find an MVUE of σ².
Refer to Exercise 9.18. Is the estimator of σ² given there an MVUE of σ²?
Refer to Exercise 9.40. Use Σ(i=1..n) Yi² to find an MVUE of θ.
The number of breakdowns Y per day for a certain machine is a Poisson random variable with mean λ. The daily cost of repairing these breakdowns is given by C = 3Y². If Y1, Y2,..., Yn denote the observed number of breakdowns for n independently selected days, find an MVUE for E(C).
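One candidate answer can be checked by simulation. Assuming C = 3Y², so E(C) = 3λ + 3λ², and using the fact that Ȳ² − Ȳ/n is unbiased for λ², the estimator 3Ȳ + 3(Ȳ² − Ȳ/n) should average to E(C). This is a sketch of that unbiasedness check, not the textbook's worked solution; λ = 2 and n = 10 are arbitrary assumptions.

```python
import numpy as np

# Unbiasedness sketch for the candidate estimator of E(C) = 3*lam + 3*lam^2.
rng = np.random.default_rng(8)
lam, n, reps = 2.0, 10, 200_000     # arbitrary choices

ybar = rng.poisson(lam, size=(reps, n)).mean(axis=1)
est = 3 * ybar + 3 * (ybar ** 2 - ybar / n)

avg = est.mean()   # target: 3*(lam + lam^2) = 18
```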
Refer to Exercise 9.49. Use Y(n) to find an MVUE of θ. (See Example 9.1.)
Refer to Exercise 9.51. Find a function of Y(1) that is an MVUE for θ.
In Exercise 9.52 you showed that Y(n) = max(Y1, Y2,..., Yn) is sufficient for θ. a Show that Y(n) has probability density function f(n)(y | θ) = 3ny^(3n−1)/θ^(3n) for 0 ≤ y ≤ θ, and 0 elsewhere. b Find the MVUE of θ.
Let Y1, Y2,..., Yn be a random sample from a normal distribution with mean μ and variance 1. a Show that the MVUE of μ² is μ̂² = Ȳ² − 1/n. b Derive the variance of μ̂².
In this exercise, we illustrate the direct use of the Rao–Blackwell theorem. Let Y1, Y2,..., Yn be independent Bernoulli random variables with p(yi | p) = p^yi (1 − p)^(1−yi), yi = 0, 1. That is, P(Yi = 1) = p and P(Yi = 0) = 1 − p. Find the MVUE of p(1 − p), which is a term in the variance of Yi or W = Σ(i=1..n) Yi, by the following steps. a Let T = 1 if Y1 = 1 and Y2 = 0, and T = 0 otherwise. Show that E(T) = p(1 − p). b Show that P(T = 1 | W = w) = w(n − w)/[n(n − 1)]. c Show that E(T | W) = [n/(n − 1)](W/n)(1 − W/n) = [n/(n − 1)]Ȳ(1 − Ȳ), and hence that nȲ(1 − Ȳ)/(n − 1) is the MVUE of p(1 − p).
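The final Rao–Blackwellized estimator can be verified numerically: over many Bernoulli samples, nȲ(1 − Ȳ)/(n − 1) should average to p(1 − p). This sketch is not from the textbook; p = 0.3 and n = 10 are arbitrary assumptions.

```python
import numpy as np

# Unbiasedness sketch for the Rao-Blackwellized estimator of p(1 - p).
rng = np.random.default_rng(6)
p, n, reps = 0.3, 10, 200_000       # arbitrary choices

ybar = rng.binomial(1, p, size=(reps, n)).mean(axis=1)
est = n * ybar * (1 - ybar) / (n - 1)

avg = est.mean()   # target: p*(1 - p) = 0.21
```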
The likelihood function L(y1, y2,..., yn | θ) takes on different values depending on the arguments (y1, y2,..., yn). A method for deriving a minimal sufficient statistic developed by Lehmann and Scheffé uses the ratio of the likelihoods evaluated at two points, (x1, x2,..., xn) and (y1, y2,..., yn): L(x1, x2,..., xn | θ)/L(y1, y2,..., yn | θ). Many times it is possible to find a function g(x1, x2,..., xn) such that this ratio is free of the unknown parameter θ if and only if g(x1, x2,..., xn) = g(y1, y2,..., yn). If such a function g can be found, then g(Y1, Y2,..., Yn) is a minimal sufficient statistic for θ. a Let Y1, Y2,..., Yn be a random sample from a Bernoulli distribution (see Example 9.6 and Exercise 9.65) with p unknown. i Show that L(x1, x2,..., xn | p)/L(y1, y2,..., yn | p) = p^(Σxi − Σyi) (1 − p)^(Σyi − Σxi). ii Argue that for this ratio to be independent of p, we must have Σ(i=1..n) xi − Σ(i=1..n) yi = 0, or Σ(i=1..n) xi = Σ(i=1..n) yi. iii Using the method of Lehmann and Scheffé, what is a minimal sufficient statistic for p? How does this sufficient statistic compare to the sufficient statistic derived in Example 9.6 by using the factorization criterion? b Consider the Weibull density discussed in Example 9.7. i Show that L(x1, x2,..., xn | θ)/L(y1, y2,..., yn | θ) = [(x1 x2 ··· xn)/(y1 y2 ··· yn)] exp{−(1/θ)[Σ(i=1..n) xi² − Σ(i=1..n) yi²]}. ii Argue that Σ(i=1..n) Yi² is a minimal sufficient statistic for θ.
Refer to Exercise 9.66. Suppose that a sample of size n is taken from a normal population with mean μ and variance σ². Show that Σ(i=1..n) Yi and Σ(i=1..n) Yi² jointly form minimal sufficient statistics for μ and σ².
Suppose that a statistic U has a probability density function that is positive over the interval a ≤ u ≤ b and suppose that the density depends on a parameter θ that can range over the interval θ1 ≤ θ ≤ θ2. Suppose also that g(u) is continuous for u in the interval [a, b]. If E[g(U) | θ] = 0 for all θ in the interval [θ1, θ2] implies that g(u) is identically zero, then the family of density functions {fU(u | θ), θ1 ≤ θ ≤ θ2} is said to be complete. (All statistics that we employed in Section 9.5 have complete families of density functions.) Suppose that U is a sufficient statistic for θ, and g1(U) and g2(U) are both unbiased estimators of θ. Show that, if the family of density functions for U is complete, g1(U) must equal g2(U), and thus there is a unique function of U that is an unbiased estimator of θ. Coupled with the Rao–Blackwell theorem, the property of completeness of fU(u | θ), along with the sufficiency of U, assures us that there is a unique minimum-variance unbiased estimator (UMVUE) of θ.
Let Y1, Y2,..., Yn denote a random sample from the probability density function f(y | θ) = (θ + 1)y^θ for 0 < y < 1, where θ > −1, and 0 elsewhere. Find an estimator for θ by the method of moments. Show that the estimator is consistent. Is the estimator a function of the sufficient statistic −Σ(i=1..n) ln(Yi) that we can obtain from the factorization criterion? What implications does this have?
If Y1, Y2,..., Yn denote a random sample from the normal distribution with known mean μ = 0 and unknown variance σ², find the method-of-moments estimator of σ².
If Y1, Y2,..., Yn denote a random sample from the normal distribution with mean μ and variance σ², find the method-of-moments estimators of μ and σ².
An urn contains θ black balls and N − θ white balls. A sample of n balls is to be selected without replacement. Let Y denote the number of black balls in the sample. Show that (N/n)Y is the method-of-moments estimator of θ.
Let Y1, Y2,..., Yn constitute a random sample from the probability density function given by f(y | θ) = (2/θ²)(θ − y) for 0 ≤ y ≤ θ, and 0 elsewhere. a Find an estimator for θ by using the method of moments. b Is this estimator a sufficient statistic for θ?
Let X1, X2, X3,... be independent Bernoulli random variables such that P(Xi = 1) = p and P(Xi = 0) = 1 − p for each i = 1, 2, 3,.... Let the random variable Y denote the number of trials necessary to obtain the first success; that is, the value of i for which Xi = 1 first occurs. Then Y has a geometric distribution with P(Y = y) = (1 − p)^(y−1) p, for y = 1, 2, 3,.... Find the method-of-moments estimator of p based on this single observation Y.
Let Y1, Y2,..., Yn denote independent and identically distributed uniform random variables on the interval (0, 3θ). Derive the method-of-moments estimator for θ.
Let Y1, Y2,..., Yn denote independent and identically distributed random variables from a power family distribution with parameters α and θ = 3. Then, as in Exercise 9.43, if α > 0, f(y | α) = αy^(α−1)/3^α for 0 ≤ y ≤ 3, and 0 elsewhere. Show that E(Y1) = 3α/(α + 1) and derive the method-of-moments estimator for α.
Let Y1, Y2,..., Yn denote independent and identically distributed random variables from a Pareto distribution with parameters α and β, where β is known. Then, if α > 0, f(y | α, β) = αβ^α y^(−(α+1)) for y ≥ β, and 0 elsewhere. Show that E(Yi) = αβ/(α − 1) if α > 1 and that E(Yi) is undefined if 0 < α ≤ 1. Thus, if α ≤ 1, the method-of-moments estimator for α is undefined.
Suppose that Y1, Y2,..., Yn denote a random sample from an exponentially distributed population with mean θ. Find the MLE of the population variance θ². [Hint: Recall Example 9.9.]
Let Y1, Y2,..., Yn denote a random sample from the density function given by f(y | θ) = (1/θ) r y^(r−1) e^(−y^r/θ) for θ > 0, y > 0, and 0 elsewhere, where r is a known positive constant. a Find a sufficient statistic for θ. b Find the MLE of θ. c Is the estimator in part (b) an MVUE for θ?
Suppose that Y1, Y2,..., Yn constitute a random sample from a uniform distribution with probability density function f(y | θ) = 1/(2θ + 1) for 0 ≤ y ≤ 2θ + 1, and 0 otherwise. a Obtain the MLE of θ. b Obtain the MLE for the variance of the underlying distribution.
Let Y1, Y2,..., Yn denote a random sample from the density function given by f(y | α, θ) = [1/(Γ(α)θ^α)] y^(α−1) e^(−y/θ) for y > 0, and 0 elsewhere, where α > 0 is known. a Find the MLE θ̂ of θ. b Find the expected value and variance of θ̂. c Show that θ̂ is consistent for θ. d What is the best (minimal) sufficient statistic for θ in this problem? e Suppose that n = 5 and α = 2. Use the minimal sufficient statistic to construct a 90% confidence interval for θ. [Hint: Transform to a χ² distribution.]
Suppose that X1, X2,..., Xm, representing yields per acre for corn variety A, constitute a random sample from a normal distribution with mean μ1 and variance σ². Also, Y1, Y2,..., Yn, representing yields for corn variety B, constitute a random sample from a normal distribution with mean μ2 and variance σ². If the X's and Y's are independent, find the MLE for the common variance σ². Assume that μ1 and μ2 are unknown.
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
A random sample of 100 voters selected from a large population revealed 30 favoring candidate A, 38 favoring candidate B, and 32 favoring candidate C. Find MLEs for the proportions of voters in the population favoring candidates A, B, and C, respectively. Estimate the difference between the fractions favoring A and B and place a 2-standard-deviation bound on the error of estimation.
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Let Y1, Y2,..., Yn denote a random sample from the probability density function f(y | θ) = (θ + 1)y^θ for 0 < y < 1, θ > -1, and 0 elsewhere. Find the MLE for θ. Compare your answer to the method-of-moments estimator found in Exercise 9.69.
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
It is known that the probability p of tossing heads on an unbalanced coin is either 1/4 or 3/4. The coin is tossed twice and a value for Y , the number of heads, is observed. For each possible value of Y , which of the two values for p (1/4 or 3/4) maximizes the probability that Y = y? Depending on the value of y actually observed, what is the MLE of p?
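Because the question only asks which of two candidate values of p maximizes a binomial likelihood, the comparison can be tabulated directly. This sketch evaluates P(Y = y) for Y ~ Binomial(2, p) under both values of p; note the tie that occurs at y = 1:

```python
from math import comb

def binom_pmf(y, n, p):
    # P(Y = y) for Y ~ Binomial(n, p)
    return comb(n, y) * p ** y * (1 - p) ** (n - y)

# likelihood of each candidate p for every possible observed y in {0, 1, 2}
likelihoods = {y: {p: binom_pmf(y, 2, p) for p in (0.25, 0.75)} for y in range(3)}
for y, L in likelihoods.items():
    print(y, L)  # both values of p give 0.375 when y = 1
```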
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Find the MLE of θ based on a random sample of size n from a uniform distribution on the interval (0, 2θ).
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Let Y1, Y2,..., Yn be a random sample from a population with density function f(y | θ) = 2θ²/y³ for θ < y < ∞, and 0 elsewhere. In Exercise 9.53, you showed that Y(1) = min(Y1, Y2,..., Yn) is sufficient for θ. a Find the MLE for θ. [Hint: See Example 9.16.] b Find a function of the MLE in part (a) that is a pivotal quantity. c Use the pivotal quantity from part (b) to find a 100(1 - α)% confidence interval for θ.
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Suppose that θ̂ is the MLE for a parameter θ. Let t(θ) be a function of θ that possesses a unique inverse [that is, if τ = t(θ), then θ = t⁻¹(τ)]. Show that t(θ̂) is the MLE of t(θ).
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
A random sample of n items is selected from the large number of items produced by a certain production line in one day. Find the MLE of the ratio R, the proportion of defective items divided by the proportion of good items.
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Consider a random sample of size n from a normal population with mean μ and variance σ², both unknown. Derive the MLE of σ².
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
The geometric probability mass function is given by p(y | p) = p(1 - p)^(y-1), y = 1, 2, 3,.... A random sample of size n is taken from a population with a geometric distribution. a Find the method-of-moments estimator for p. b Find the MLE for p.
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Refer to Exercise 9.97. What is the approximate variance of the MLE?
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Consider the distribution discussed in Example 9.18. Use the method presented in Section 9.8 to derive a 100(1 - α)% confidence interval for t(p) = p. Is the resulting interval familiar to you?
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Let Y1, Y2,..., Yn denote a random sample of size n from a Poisson distribution with mean λ. Find a 100(1 - α)% confidence interval for t(λ) = e^(-λ) = P(Y = 0).
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Refer to Exercises 9.97 and 9.98. If a sample of size 30 yields ȳ = 4.4, find a 95% confidence interval for p.
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
A random sample of size n is taken from a population with a Rayleigh distribution. As in Exercise 9.34, the Rayleigh density function is f(y) = (2y/θ) e^(-y²/θ) for y > 0, and 0 elsewhere. a Find the MLE of θ. *b Find the approximate variance of the MLE obtained in part (a).
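One way to explore this density numerically: its CDF is F(y) = 1 - e^(-y²/θ), which inverts to y = √(-θ ln u), so samples are easy to draw and the identity E(Y²) = θ can be checked. The value θ = 4 is an arbitrary illustrative choice:

```python
import math
import random

random.seed(7)
theta = 4.0        # arbitrary illustrative value
n = 100_000
# inverse transform: U ~ Uniform(0, 1] gives Y = sqrt(-theta * ln U)
ys = [math.sqrt(-theta * math.log(1.0 - random.random())) for _ in range(n)]
mean_y2 = sum(y * y for y in ys) / n
print(round(mean_y2, 2))  # E(Y^2) = theta for this density
```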
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Suppose that Y1, Y2,..., Yn constitute a random sample from the density function f(y | θ) = e^(-(y-θ)) for y > θ, and 0 elsewhere, where θ is an unknown, positive constant. a Find an estimator θ̂1 for θ by the method of moments. b Find an estimator θ̂2 for θ by the method of maximum likelihood. c Adjust θ̂1 and θ̂2 so that they are unbiased. Find the efficiency of the adjusted θ̂1 relative to the adjusted θ̂2.
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Refer to Exercise 9.38(b). Under the conditions outlined there, find the MLE of σ².
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Suppose that Y1, Y2,..., Yn denote a random sample from a Poisson distribution with mean λ. Find the MVUE of P(Yi = 0) = e^(-λ). [Hint: Make use of the Rao-Blackwell theorem.]
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Suppose that a random sample of length-of-life measurements, Y1, Y2,..., Yn, is to be taken of components whose length of life has an exponential distribution with mean θ. It is frequently of interest to estimate F̄(t) = 1 - F(t) = e^(-t/θ), the reliability at time t of such a component. For any fixed value of t, find the MLE of F̄(t).
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Suppose that n integers are drawn at random, and with replacement, from the integers 1, 2,..., N. That is, each sampled integer has probability 1/N of taking on any of the values 1, 2,..., N, and the sampled values are independent. a Find the method-of-moments estimator N̂1 of N. b Find E(N̂1) and V(N̂1).
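Part (a) turns on the first population moment: E(Y) = (N + 1)/2 for a uniform draw from {1,..., N}. The simulation below (N = 200 and n = 5 are arbitrary illustrative choices) checks only that moment identity, not the exercise's answers:

```python
import random

random.seed(3)
N, n, reps = 200, 5, 20_000   # N and n chosen only for illustration
sample_means = []
for _ in range(reps):
    draws = [random.randint(1, N) for _ in range(n)]  # with replacement
    sample_means.append(sum(draws) / n)

avg = sum(sample_means) / reps
print(round(avg, 1))  # E(Ybar) = (N + 1)/2 = 100.5 here
```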
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Refer to Exercise 9.110. Suppose that enemy tanks have serial numbers 1, 2,..., N. A spy randomly observed five tanks (with replacement) with serial numbers 97, 64, 118, 210, and 57. Estimate N and place a bound on the error of estimation.
-
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Let Y1, Y2,..., Yn denote a random sample from a Poisson distribution with mean λ and define Wn = (Ȳ - λ)/√(Ȳ/n). a Show that the distribution of Wn converges to a standard normal distribution. b Use Wn and the result in part (a) to derive the formula for an approximate 95% confidence interval for λ.
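Part (a) can be previewed by simulation: if Wn is approximately standard normal, about 95% of simulated values should land in (-1.96, 1.96). The sketch below uses a simple Knuth-style Poisson sampler; λ = 4 and n = 100 are arbitrary illustrative values:

```python
import math
import random

random.seed(5)

def poisson(lam):
    # Knuth-style Poisson sampler; adequate for moderate lam
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

lam, n, reps = 4.0, 100, 10_000
inside = 0
for _ in range(reps):
    ys = [poisson(lam) for _ in range(n)]
    ybar = sum(ys) / n
    w = (ybar - lam) / math.sqrt(ybar / n)  # Wn as defined in the exercise
    if abs(w) <= 1.96:
        inside += 1

coverage = inside / reps
print(round(coverage, 3))  # near 0.95 if Wn is approximately N(0, 1)
```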