In Exercise , we considered a random sample of size 3 from an exponential distribution with density function given by \(f(y)=\left\{\begin{array}{ll} (1 / \theta) e^{-y / \theta}, & 0<y \\ 0, & \text { elsewhere } \end{array}\right.\) and determined that \(\widehat{\theta}_{1}=Y_{1}, \widehat{\theta}_{2}=\left(Y_{1}+Y_{2}\right) / 2, \widehat{\theta}_{3}=\left(Y_{1}+2 Y_{2}\right) / 3\), and \(\widehat{\theta}_{5}=\bar{Y}\) are all unbiased estimators for \(\theta\). Find the efficiency of \(\widehat{\theta}_{1}\) relative to \(\widehat{\theta}_{5}\), of \(\widehat{\theta}_{2}\) relative to \(\widehat{\theta}_{51}\), and of \(\widehat{\theta}_{3}\)relative to \(\widehat{\theta}_{5}\). Equation Transcription: { Text Transcription: \(f(y)={ (1 / \theta) e^-y / \theta, & 0<y \ 0, & elsewhere \widehat\theta_1=Y_1, \widehat{\theta_2=\left(Y_1+Y_2\) / 2, \wideha{\theta_3=\left(Y_1+2 Y_2 / 3 \widehat\theta_5=\bar Y \theta \widehat\theta_1 \widehat\theta_5 \widehat\theta_2 \widehat\theta_5 \widehat\theta_3 \widehat\theta_5
Read more- Statistics / Mathematical Statistics with Applications 7 / Chapter 9 / Problem 44E
Table of Contents
Textbook Solutions for Mathematical Statistics with Applications
Question
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote independent and identically distributed random variables from a Pareto distribution with parameters \(\alpha \text { and } \beta\). Then, by the result in Exercise 6.18, if \(\alpha, \beta>0\),
\(f(y \mid \alpha, \beta)=\left\{\begin{array}{ll} \alpha \beta^{\alpha} y^{-(\alpha+1)}, & y \geq \beta \\ 0, & \text { elsewhere } \end{array}\right.\)
If \(\beta\) is known, show that \(\prod_{i=1}^{n} Y_{i}\) is sufficient for \(\alpha\)
Solution
The first step in solving 9 problem number 44 trying to solve the problem we have to refer to the textbook question: Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote independent and identically distributed random variables from a Pareto distribution with parameters \(\alpha \text { and } \beta\). Then, by the result in Exercise 6.18, if \(\alpha, \beta>0\),\(f(y \mid \alpha, \beta)=\left\{\begin{array}{ll} \alpha \beta^{\alpha} y^{-(\alpha+1)}, & y \geq \beta \\ 0, & \text { elsewhere } \end{array}\right.\)If \(\beta\) is known, show that \(\prod_{i=1}^{n} Y_{i}\) is sufficient for \(\alpha\)
From the textbook chapter Properties of Point Estimators and Methods of Estimation you will find a few key concepts needed to solve this.
Visible to paid subscribers only
Step 3 of 7)Visible to paid subscribers only
full solution
Let Y1, Y2, . . . , Yn denote independent and identically
Chapter 9 textbook questions
-
Chapter 9: Problem 1 Mathematical Statistics with Applications 7
-
Chapter 9: Problem 3 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from the uniform distribution on the interval \((\theta, \theta+1)\). Let \(\hat{\theta}_{1}=\bar{Y}-\frac{1}{2}\) and \(\hat{\theta}_{2}=Y_{(n)}-\frac{n}{n+1}\) a Show that both \(\hat{\theta}_{1}\) and \(\hat{\theta}_{2}\) are unbiased estimators of \(\theta\). b Find the efficiency of \(\hat{\theta}_{1}\) relative to \(\hat{\theta}_{2}\). Equation Transcription: = - = Text Transcription: Y_1, Y_2,..., Y_n (theta, theta + 1) hat{theta}_1 = bar{Y} - frac{1}{2} hat{theta}_1 hat{theta}_2 hat{theta}_2 =Y_{(n)} - frac{n}{n+1} theta hat{theta}_1 hat{theta}_2
Read more -
Chapter 9: Problem 2 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from a population with mean \(\mu\) and variance \(\sigma^{2}\). Consider the following three estimators for \(\mu\): \(\hat{\mu}_{1}=\frac{1}{2}\left(Y_{1}+Y_{2}\right), \quad \hat{\mu}_{2}=\frac{1}{4} Y_{1}+\frac{Y_{2}+\cdots+Y_{n-1}}{2(n-2)}+\frac{1}{4} Y_{n}, \quad \hat{\mu}_{3}=\bar{Y}\) a Show that each of the three estimators is unbiased. b Find the efficiency of \(\hat{\mu}_{3}\) relative to \(\hat{\mu}_{2}\) and \(\hat{\mu}_{1}\), respectively. Equation Transcription: = , = = Text Transcription: Y_{1}, Y_{2}, ..., Y_{n} mu sigma^2 mu hat{mu}_{1} = frac{1}{2} (Y_1 +Y_2), quad hat{mu}_2 = frac{1}{4} Y_{1} + frac{Y_{2}+ ... +Y_{n-1}}{2(n-2)} + frac{1}{4} Y_ n, quad hat{mu}_{3} = bar{Y} hat{mu}_{3} hat{mu}_{2} hat{mu}_{1}
Read more -
Chapter 9: Problem 112 Mathematical Statistics with Applications 7
Let denote a random sample from a Poisson distribution with mean \(\lambda\) and define \(W_{n}=\frac{\bar{Y}-\lambda}{\sqrt{\bar{Y} / n}}\) a. Show that the distribution of \(W_{n}\) converges to a standard normal distribution. b. Use \(W_{n}\) and the result in part (a) to derive the formula for an approximate \(95 \%\) confidence interval for \(\lambda\). Equation Transcription: Text Transcription: Y1, Y2,...,Yn \lambda W_n=\frac\bar Y-\lambda \sqrt\bar Y / n W_n W_n 95% \lambda
Read more -
Chapter 9: Problem 5 Mathematical Statistics with Applications 7
Suppose that \(Y_{1}, Y_{2}, \ldots, Y_{n}\) is a random sample from a normal distribution with mean \(\mu\) and variance \(\sigma^{2}\). Two unbiased estimators of \(\sigma^{2}\) are \(\hat{\sigma}_{1}^{2}=S^{2}=\frac{1}{n-1} \sum_{i=1}^{n}\left(Y_{i}-\bar{Y}\right)^{2}\) and \(\hat{\sigma}_{2}^{2}=\frac{1}{2}\left(Y_{1}-Y_{2}\right)^{2}\) Find the efficiency of \(\hat{\sigma}_{1}^{2}\) relative to \(\hat{\sigma}_{2}^{2}\): Equation Transcription: = = Text Transcription: Y_{1}, Y_{2}, ..., Y_{n} mu sigma^{2} sigma^{2} hat{sigma}_{1}^{2} = S^{2} = frac{1}{n-1} sum_{i =1}^{n} (Y_{i}-bar{Y})^2 hat{sigma}_{2}^{2 }= frac{1}{2} (Y_1 -Y_2)^2 hat{sigma}_{1}^{2} hat{sigma}_{2}^{2}
Read more -
Chapter 9: Problem 6 Mathematical Statistics with Applications 7
Suppose that \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample of size \(n\) from a Poisson distribution with mean \(\lambda\). Consider \(\hat{\lambda}_{1}=\left(Y_{1}+Y_{2}\right) / 2\) and \(\hat{\lambda}_{2}=\bar{Y}\). Derive the efficiency of \(\hat{\lambda}_{1}\) relative to \(\hat{\lambda}_{2}\). Equation Transcription: = = Text Transcription: Y_{1}, Y_{2}, ..., Y_{n} n hat{lambda}_{1} = (Y_1 + Y_2)/2 hat{lambda}_{2} = bar{Y} hat{lambda}_1 hat{lambda}_2
Read more -
Chapter 9: Problem 11 Mathematical Statistics with Applications 7
Applet Exercise Refer to Exercises 9.9 and 9.10. How can the results of several sequences of Bernoulli trials be simultaneously plotted? Access the applet PointbyPoint. Scroll down until you can view all six buttons under the top graph. a Do not change the value of p from the preset value = .5. Click the button “One Trial” a few times to verify that you are obtaining a result similar to those obtained in Exercise 9.9. Click the button “5 Trials” until you have generated a total of 50 trials. What is the value of \(\hat{p}_{50}\) that you obtained at the end of this first sequence of 50 trials? b Click the button “New Sequence.” The color of your initial graph changes from red to green. Click the button “5 Trials” a few times. What do you observe? Is the graph the same as the one you observed in part (a)? In what sense is it similar? c Click the button “New Sequence.” Generate a new sequence of 50 trials. Repeat until you have generated five sequences. Are the paths generated by the five sequences identical? In what sense are they similar? Equation Transcription: Text Transcription: \hatp_50
Read more -
Chapter 9: Problem 4 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample of size \(n\) from a uniform distribution on the interval \((0, \theta)\). If \(Y_{(1)}=\min \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\), the result of Exercise 8.18 is that \(\hat{\theta}_{1}=(n+1) Y_{(1)}\) is an unbiased estimator for \(\theta\). If \(Y_{(n)}=\max \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\), the results of Example 9.1 imply that \(\hat{\theta}_{2}=[(n+1) / n] Y_{(n)}\) is another unbiased estimator for \(\theta\). Show that the efficiency of \(\hat{\theta}_{1}\) to \(\hat{\theta}_{2}\) is \(1 / n^{2}\). Notice that this implies that \(\hat{\theta}_{2}\)is a markedly superior estimator. Equation Transcription: = min = = max = Text Transcription: Y_{1}, Y_{2}, ..., Y_{n} n (0, theta) Y_{(1)} = min (Y_{1}, Y_{2}, ..., Y_{n}) hat{theta}_{1}=(n+1) Y_{(1)} theta Y_{(n)} = max (Y_{1}, Y_{2}, ..., Y_{n}) hat{theta}_{2} = [(n+1) / n] Y_(n) theta hat{theta}_{1} hat{theta}_{2} 1/n^2 hat{theta}_{2}
Read more -
Chapter 9: Problem 7 Mathematical Statistics with Applications 7
Suppose that \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample of size \(n\) from an exponential distribution with density function given by \(f(y)=\left\{\begin{array}{lc}(1 / \theta) e^{-y / \theta}, & 0<y, \\0, & \text { elsewhere }\end{array}\right.\) In Exercise 8.19, we determined that \(\hat{\theta}_{1}=n Y_{(1)}\) is an unbiased estimator of \(\theta\) with MSE \(\left(\hat{\theta}_{1}\right)=\theta^{2}\). Consider the estimator \(\hat{\theta}_{2}=\bar{Y}\) and find the efficiency of \(\hat{\theta}_{1}\) relative to \(\hat{\theta}_{2}\). Equation Transcription: { = ( ) = = 1 2 Text Transcription: Y_{1}, Y_{2}, ..., Y_{n} n f(y) = {^{(1/theta)e^y/theta, 0 < y}{0, elsewhere hat{theta}_{1} = n Y_(1) (hat{theta}_1) = theta^2 hat{theta}_{2} = bar Y hat{theta}_{1} hat{theta}_{2}
Read more -
Chapter 9: Problem 8 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \cdots, Y_{n}\) denote a random sample from a probability density function \(f(y)\), which has unknown parameter \(\theta\). If \(\hat{\theta}\) is an unbiased estimator of \(\theta\), then under very general conditions \(V(\hat{\theta}) \geq I(\theta), \text { where } I(\theta)=\left[n E\left(-\frac{\partial^{2} \ln f(Y)}{\partial \theta^{2}}\right)\right]^{-1}\) (This is known as the Cramer-Rao inequality.) If \(V(\hat{\theta}) \geq I(\theta)\), the estimator \(\hat{\theta}\) is said to be efficient.1 a Suppose that \(f(y)\) is the normal density with mean \(\mu\) and variance \(\sigma^{2}\). Show that \(\bar{Y}\) is an efficient estimator of \(\mu\). b This inequality also holds for discrete probability functions \(p(y)\). Suppose that \(p(y)\) is the Poisson probability function with mean \(\lambda\). Show that \(\bar{Y}\) is an efficient estimator of \(\lambda\) Equation Transcription: Text Transcription: Y_1, Y_2, cdots, Y_n f(y) theta hat theta theta V hat theta geq I (theta), where I(theta)=[n E(-partial^2 ln f(Y)/partial theta^2]^-1} V hat theta)geq I(theta) hat theta f(y) mu sigma^2 barY mu p(y) p(y) lambda bar Y lambda
Read more -
Chapter 9: Problem 15 Mathematical Statistics with Applications 7
Refer to Exercise 9.3. Show that both \(\hat{\theta}_{1} \text { and } \widehat{\theta}_{2}\) are consistent estimators for \(\theta\) Equation Transcription: Text Transcription: \hat\theta_1 and \widehat\theta_2 \theta
Read more -
Chapter 9: Problem 9 Mathematical Statistics with Applications 7
Applet Exercise How was Figure obtained? Access the applet PointSingle at www. thomsonedu.com/statistics/wackerly. The top applet will generate a sequence of Bernoulli trials \(\left[X_{i}=1,0 \text { with } p(1)=p, p(0)=1-p\right]\) with \(p=.5\), a scenario equivalent to successively tossing a balanced coin. Let \(Y_{n}=\sum_{i=1}^{n} X_{i}=\) the number of \(1 s\) in the first trials and \(\hat{p}_{n}=Y_{n} / n\). For each , the applet computes \(\hat{p}_{n}\) and plots it versus the value of . a If \(\hat{p}_{5}=2 / 5\), what value of \(X_{6}\) will result in \(\hat{p}_{6}>\hat{p}_{5}\) ? b Click the button "One Trial" a single time. Your first observation is either 0 or 1 . Which value did you obtain? What was the value of \(\hat{p}_{1}\) ? Click the button "One Trial" several more times. How many trials n have you simulated? What value of ˆpn did you observe? Is the value close to .5, the true value of p? Is the graph a flat horizontal line? Why or why not? c Click the button “100 Trials” a single time. What do you observe? Click the button “100 Trials” repeatedly until the total number of trials is 1000. Is the graph that you obtained identical to the one given in Figure 9.1? In what sense is it similar to the graph in Figure 9.1? d Based on the sample of size 1000, what is the value of \(\hat{p} 1000\)? Is this value what you expected to observe? e Click the button “Reset.” Click the button “100 Trials” ten times to generate another sequence of values for \(\hat{p}\). Comment. Equation Transcription: Text Transcription: \left[X_i=1,0 with p(1)=p, p(0)=1-p\right] p=.5 Y_n=\sum_i=1^n X_i = 1 s \hatp_n=Y_n / n \hat{p}_{n} \hat{p}_{5}=2 / 5 X6 \hat p_ 6 >\hat p_5 \hat p_1 \hat p 1000 \hat p
Read more -
Chapter 9: Problem 14 Mathematical Statistics with Applications 7
Applet Exercise Refer to Exercise 9.13. Scroll down to the portion of the applet labeled “Mean of Normal Data.” Successive observed values of a standard normal random variable can be generated and used to compute the value of the sample mean \(\bar{Y}_{n}\). These successive values are then plotted versus the respective sample size to obtain one “sample path.” a Do you expect the values of \(\bar{Y}\)n to cluster around any particular value? What value? b If the results of 50 sample paths are plotted, how do you expect the variability of the estimates to change as a function of sample size? c Click the button “New Sequence” several times. Did you observe what you expected based on your answers to parts (a) and (b)? Equation Transcription: Text Transcription: \bar Y_n \bar Y
Read more -
Chapter 9: Problem 12 Mathematical Statistics with Applications 7
Applet Exercise Refer to Exercise 9.11. What happens if each sequence is longer? Scroll down to the portion of the screen labeled “Longer Sequences of Trials.” a Repeat the instructions in parts (a)–(c) of Exercise 9.11. b What do you expect to happen if p is not 0.5? Use the button in the lower right corner to change to value of p. Generate several sequences of trials. Comment.
Read more -
Chapter 9: Problem 10 Mathematical Statistics with Applications 7
Applet Exercise Refer to Exercise 9.9. Scroll down to the portion of the screen labeled “Try different probabilities.” Use the button labeled \(" \mathrm{p}="\) in the lower right corner of the display to change the value of p to a value other than .5. a Click the button “One Trial” a few times. What do you observe? b Click the button “100 Trials” a few times. What do you observe about the values of \(\hat{p}\) n as the number of trials gets larger? Equation Transcription: Text Transcription: "p=" \hat p
Read more -
Chapter 9: Problem 16 Mathematical Statistics with Applications 7
Refer to Exercise 9.5. Is \(\hat{\sigma}_{2}^{2}\) a consistent estimator of \(\sigma^{2}\)? Equation Transcription: Text Transcription: \hat\sigma_2^2 \sigma^2
Read more -
Chapter 9: Problem 13 Mathematical Statistics with Applications 7
Applet Exercise Refer to Exercises 9.9–9.12. Access the applet Point Estimation. a Chose a value for p. Click the button “New Sequence” repeatedly. What do you observe? b Scroll down to the portion of the applet labeled “More Trials.” Choose a value for p and click the button “New Sequence” repeatedly. You will obtain up to 50 sequences, each based on 1000 trials. How does the variability among the estimates change as a function of the sample size? How is this manifested in the display that you obtained?
Read more -
Chapter 9: Problem 18 Mathematical Statistics with Applications 7
In Exercise 9.17, suppose that the populations are normally distributed \(with\sigma_{1}^{2}=\sigma_{2}^{2}=\sigma^{2}\). Show that \(\frac{\sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)^{2}+\sum_{i=1}^{n}\left(Y_{t}-\bar{Y}\right)^{2}}{2 n-2}\) is a consistent estimator of \(\sigma^{2}\). Equation Transcription: Text Transcription: sigma_{1}^{2} = sigma_{2}^{2} = sigma^2 frac{sum_{i=1}^{n}(X_{i}-bar{X})^2 + sum_{i=1}^{n}(Y_{t}-\bar{Y})^2}{2 n-2} sigma^2
Read more -
Chapter 9: Problem 17 Mathematical Statistics with Applications 7
Suppose that \(X_{1}, X_{2}, \ldots, X_{n}\) and \(Y_{1}, Y_{2}, \ldots, Y_{n}\) are independent random samples from populations with means \(\mu_{1}\) and \(\mu_{2}\) and variances \(\sigma_{2}^{1}\) and \(\sigma_{2}^{2}\), respectively. Show that \(\bar{X}-\bar{Y}\) is a consistent estimator of \(\mu_{1}-\mu_{2}\). Equation Transcription: - Text Transcription: X_1, X_2, …., X_n Y_1, Y_2, …., Y_n mu_1 mu_{1} sigma_{2}^{1} sigma_{2}^{1} bar{X} - bar{Y} mu_{1} - mu_{2}
Read more -
Chapter 9: Problem 20 Mathematical Statistics with Applications 7
Problem 20E If Y has a binomial distribution with n trials and success probability p, show that Y /n is a consistent estimator of p.
Read more -
Chapter 9: Problem 21 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be a random sample of size \(n\) from a normal population with mean \(\mu\) and variance \(\sigma^{2}\). Assuming that \(n=2 k\) for some integer \(k\), one possible estimator for \(\sigma^{2}\) is given by \(\widehat{\sigma}^{2}=\frac{1}{2 k} \sum_{i=1}^{k}\left(Y_{2 i}-Y_{2 i-1}\right)^{2}\) a Show that \(\hat{\sigma}^{2}\) is an unbiased estimator for \(\sigma^{2}\). b Show that \(\hat{\sigma}^{2}\) is a consistent estimator for \(\sigma^{2}\). Equation Transcription: Text Transcription: Y_1, Y_2, …., Y_n n mu sigma^{2} n = 2 k k sigma^{2} hat{sigma}^{2} = frac{1}{2 k} sum_{i=1}^{k} (Y_{2 i}-Y_{2 i-1})^2 hat{sigma}^{2} sigma^{2} hat{sigma}^{2} sigma^{2}
Read more -
Chapter 9: Problem 19 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from the probability density function \(f(y)=\left\{\begin{array}{ll} \theta y^{\theta-1}, & 0<y<1 \\ 0, & \text { elsewhere } \end{array}\right.\) where \(\theta>0\). Show that \(\bar{Y}\) is a consistent estimator of \theta \(/(\theta+1)\). Equation Transcription: { Text Transcription: Y_1, Y_2, \ldots, Y_n f(y)=\theta y^\theta-1, & 0<y<1 0, elsewhere \theta>0 \bar Y \theta /(\theta+1)
Read more -
Chapter 9: Problem 22 Mathematical Statistics with Applications 7
Refer to Exercise 9.21. Suppose that \(Y_{1}, Y_{2}, \ldots, Y_{n}\) is a random sample of size from a Poisson-distributed population with mean \(\lambda\). Again, assume that \(n=2 k\) for some integer . Consider \(\hat{\lambda}=\frac{1}{2 k} \sum_{i=1}^{k}\left(Y_{2 i}-Y_{2 i-1}\right)^{2}\) a Show that \(\hat{\lambda}\) is an unbiased estimator for \(\lambda\). b Show that \(\hat{\lambda}\) is a consistent estimator for \(\lambda\). Equation Transcription: Text Transcription: Y_1, Y_2, \ldots, Y_n \lambda n=2 k \hat\lambda=\frac1 2 k \sum_i=1^k\left(Y_2 i-Y_2 i-1\right)^2 \hat\lambda \lambda \hat\lambda \lambda ________________
Read more -
Chapter 9: Problem 23 Mathematical Statistics with Applications 7
Refer to Exercise 9.21. Suppose that \(Y_{1}, Y_{2}, \ldots, Y_{n}\) is a random sample of size \(n\) from a population for which the first four moments are finite. That is, \(m_{1}^{\prime}=E\left(Y_{1}\right)<\infty, m_{2}^{\prime}=E\left(Y_{1}^{2}\right)<\infty, m_{3}^{\prime}=E\left(Y_{1}^{3}\right)<\infty, \text { and } m_{4}^{\prime}=E\left(Y_{1}^{4}\right)<\infty\) (Note: This assumption is valid for the normal and Poisson distributions in Exercises 9.21 and 9.22, respectively.) Again, assume that \(n=2 k\) for some integer \(k\). Consider \(\widehat{\sigma}^{2}=\frac{1}{2 k} \sum_{i=1}^{k}\left(Y_{2 i}-Y_{2 i-1}\right)^{2}\) a Show that \(\hat{\sigma}^{2}\) is an unbiased estimator for \(\sigma^{2}\). b Show that \(\hat{\sigma}^{2}\) is a consistent estimator for \(\sigma^{2}\) c Why did you need the assumption that \(m_{4}^{\prime}=E\left(Y_{1}^{4}\right)<\infty\)? Equation Transcription: and Text Transcription: Y_1, Y_2, …., Y_n n M’_1 = E(Y_1) < infty, m’_2 = E(Y_{1}^{2}) < infty,m’_3 = E(Y_{1}^{3}) < infty, and m’_4 = E(Y_{1}^{4}) < infty n = 2 k k hat{sigma}^{2} = frac{1}{2 k} sum_{i=1}^{k} (Y_{2 i}-Y_{2 i-1})^2 hat{sigma}^{2} sigma^{2} hat{sigma}^{2} sigma^{2} m’_4 = E(Y_{1}^{4}) < infty
Read more -
Chapter 9: Problem 25 Mathematical Statistics with Applications 7
Suppose that denote a random sample of size from a normal distribution with mean \(\mu\) and variance 1. Consider the first observation \(Y_{1}\) as an estimator for \(\mu\). a Show that \(Y_{1}\) is an unbiased estimator for \(\mu\). b Find \(P\left(\left|Y_{1}-\mu\right| \leq 1\right)\). c Look at the basic definition of consistency given in Definition 9.2. Based on the result of part (b), is \(Y_{1}\) a consistent estimator for ? Equation Transcription: Text Transcription: Y1, Y2,...,Yn \mu Y_1 \mu Y_1 \mu P\left(\left|Y_1-\mu\right| \leq 1\right) Y_1
Read more -
Chapter 9: Problem 24 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be independent standard normal random variables. a What is the distribution of \(\sum_{i=1}^{n} Y_{1}^{2}\)? b Let \(W_{n}=\frac{1}{2} \sum_{i=1}^{n} Y_{1}^{2}\). Does \(W_{n}\) converge in probability to some constant? If so, what is the value of the constant? Equation Transcription: Text Transcription: Y_1, Y_2, …., Y_n sum_{i=1}^{n} Y_{1}^{2} W_n = frac{1}{2} sum_{i=1}^{n} Y_{1}^{2} W_n
Read more -
Chapter 9: Problem 26 Mathematical Statistics with Applications 7
It is sometimes relatively easy to establish consistency or lack of consistency by appealing directly to Definition 9.2, evaluating \(P\left(\left|\widehat{\theta}_{n}-\theta\right| \leq \varepsilon\right)\)directly, and then showing that \(lim _{n \rightarrow \infty} P\left(\left|\hat{\theta}_{2}-\theta\right| \leq \varepsilon\right) = 1\). Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample of size \(n\) from a uniform distribution on the interval \((0, \theta)\). If \(Y_{(n)}=\max \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) we showed in Exercise 6.74 that the probability distribution function of \(Y_{(n)}\) is given by \(F_{(n)}(y)=\left\{\begin{array}{cc}0, & y<0 \\(y / \theta)^{n}, & 0 \leq y \leq \theta, \\1, & y>\theta .]\end{array}\right.\) a For each \(n \geq 1\) and every \(\varepsilon>\theta\), it follows that \(P\left(\left|Y_{(n)}-\theta\right| \leq \varepsilon\right)=P\left(\theta-\varepsilon \leq Y_{(n)} \leq \theta+\varepsilon\right.\). If \(\varepsilon>\theta\), verify that \(P\left(\theta-\varepsilon \leq Y_{(n)} \leq \theta+\varepsilon\right)=1\) and that, for every positive \(\varepsilon>\theta\), we obtain \(P\left(\theta-\varepsilon \leq Y_{(n)} \leq \theta+\varepsilon\right)=1-[(\theta-\varepsilon) / \theta]^{n}\) b Using the result from part (a), show that \(Y_{(n)}\) is a consistent estimator for \(\theta\) by showing that, for every \(\varepsilon>0, \lim _{n \rightarrow \infty} P\left(\left|Y_{(n)}-\theta\right| \leq \varepsilon\right)=1\). Equation Transcription: = max { Text Transcription: P(hat{theta}_{n} - theta| leq varepsilon) lim _{n rightarrow infty} P(|hat{theta}_{2} -theta| leq varepsion) = 1 Y_{1}, Y_{2}, \ldots, Y_{n} n (0, theta) Y_(n) = max (Y_1, Y_2, …., Y_n) Y_n F_{(n)}(y) = {\begin{array}{cl}0, & y<0 \\(y/theta)^{n}, & 0 leq y leq theta \\1, & y > theta.] n geq 1 varepsilon > theta P(|Y_{(n)} - theta| leq varepsilon) = P(theta - varepsilon leq Y_{(n)} leq theta + varepsilon. 
varepsilon > theta P(theta - varepsilon leq Y_{(n)} leq theta + varepsilon) = 1 varepsilon > theta P(theta - varepsilon leq Y_{(n)} leq theta + varepsilon) = 1 -[(theta - varepsilon) / theta]^n Y_n theta varepsilon > 0, lim _{n rightarrow infty} P(|Y_{(n)} - theta| leq varepsilon) = 1
Read more -
Chapter 9: Problem 30 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be independent random variables, each with probability density function \(f(y)=\left\{\begin{array}{ll} 3 y^{2}, & 0 \leq y \leq 1 \\ 0, & \text { elsewhere } \end{array}\right.\) Show that \(\bar{Y}\) converges in probability to some constant and find the constant. Equation Transcription: { Text Transcription: Y1, Y2,...,Yn f(y)={ 3 y^2, & 0 \leq y \leq 1 \\ 0, elsewhere \bar Y
Read more -
Chapter 9: Problem 28 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample of size \(n\) from a Pareto distribution (see Exercise 6.18). Then the methods of Section 6.7 imply that \(Y_{(1)}=\min \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) has the distribution function given by \(F_{(1)}(y)=\left\{\begin{array}{lc}0, & y \leq \beta \\1-(\beta / y)^{an}, & y>\beta\end{array}\right\}\) Use the method described in Exercise 9.26 to show that \(Y_{(1)}\) is a consistent estimator of \(\beta\). Equation Transcription: = min = Text Transcription: Y_1, Y_2, …., Y_n n F_{(1)}(y) = {{lc}0, & y leq beta \\1-(beta / y)^{an}, & y > beta} Y_(1) beta
Read more -
Chapter 9: Problem 27 Mathematical Statistics with Applications 7
Use the method described in Exercise 9.26 to show that, if \(Y_{(1)}=\min \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) when \(Y_{1}, Y_{2}, \ldots, Y_{n}\) are independent uniform random variables on the interval \((0, \theta)\), then \(Y_{(1)}\) is not a consistent estimator for \(\theta\). [Hint: Based on the methods of Section 6.7, \(Y_{(1)}\) has the distribution function \(F_{(1)}(y)=\left\{\begin{array}{ll}0, & y<0 \\1-(y / \theta)^{n}, & 0 \leq y \leq 8, \\1, & y>\theta .]\end{array}\right.\) Equation Transcription: = min { Text Transcription: Y_(1) = min (Y_1, Y_2, …., Y_n) Y_1, Y_2, …., Y_n (0, theta) Y_(1) theta Y_(1) F_{(1)}(y) = {0, & y < 0 \\1- (y / theta)^{n}, & 0 leq y leq 8, \\1, & y > theta .]
Read more -
Chapter 9: Problem 29 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample of size \(n\) from a power family distribution (see Exercise 6.17). Then the methods of Section 6.7 imply that \(Y_{(n)}=\max \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) has the distribution function given by \(F_{(1)}(y)=\left\{\begin{array}{ll}0, & y<0 \\(y / \theta)^{n}, & 0 \leqslant y \leq \theta, \\1, & y>\theta .]\end{array}\right.\) Use the method described in Exercise 9.26 to show that \(Y_{(n)}\) is a consistent estimator of \(\theta\). Equation Transcription: = max { Text Transcription: Y_1, Y_2, …., Y_n n Y_{(n)} = max (Y_1, Y_2, …., Y_n) F_{(1)}(y) = {0, & y<0 \\ (y / theta)^{n}, & 0 leqslant y leq theta, 1, & y > theta .] Y_(1) theta
Read more -
Chapter 9: Problem 31 Mathematical Statistics with Applications 7
If \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from a gamma distribution with parameters \(\alpha \text { and } \beta\), show that \(\bar{Y}\) converges in probability to some constant and find the constant. Equation Transcription: Text Transcription: Y1, Y2,...,Yn \alpha and \beta \bar Y
Read more -
Chapter 9: Problem 32 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from the probability density function \(f(y)=\left\{\begin{array}{ll} \frac{2}{y^{2}}, & y \geq 2 \\ 0, & \text { elsewhere } \end{array}\right.\) Does the law of large numbers apply to \(\bar{Y}\) in this case? Why or why not? Equation Transcription: { Text Transcription: Y1, Y2,...,Yn f(y)=\frac 2 y^2, & y \geq 2 0, elsewhere \bar Y
Read more -
Chapter 9: Problem 33 Mathematical Statistics with Applications 7
Problem 33E An experimenter wishes to compare the numbers of bacteria of types A and B in samples of water. A total of n independent water samples are taken, and counts are made for each sample. Let Xi denote the number of type A bacteria and Yi denote the number of type B bacteria for sample i. Assume that the two bacteria types are sparsely distributed within a water sample so that X1, X2, . . . , Xn and Y1, Y2, . . . , Yn can be considered independent random samples from Poisson distributions with means ?1 and ?2, respectively. Suggest an estimator of ?1/(?1 + ?2). What properties does your estimator have?
Read more -
Chapter 9: Problem 34 Mathematical Statistics with Applications 7
The Rayleigh density function is given by \(f(y)=\left\{\begin{array}{cl}\left(\frac{2 y}{\theta}\right) e^{-y^{2} / \theta}, & y>0 \\0 & \text { elsewhere }\end{array}\right.\) In Exercise 6.34(a), you established that \(Y^{2}\)has an exponential distribution with mean \(\theta\). If \(Y_{1}, Y_{2}, \ldots, Y_{n}\)denote a random sample from a Rayleigh distribution, show that \(W_{n}=\frac{1}{n} \sum_{i=1}^{n} Y_{i}^{2}\) is a consistent estimator for \(\theta\). Equation Transcription: { Text Transcription: f(x) = {(frac{2 y}{theta}) e^{-y^{2} /theta}, & y > 0 0 & elsewhere } Y^2 theta Y_1, Y_2, …., Y_n W_n = frac{1}{n} sum_{i=1}^{n} Y_{i}^{2} theta
Read more -
Chapter 9: Problem 39 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from a Poisson distribution with parameter \(\lambda\). Show by conditioning that \(\sum_{i=1}^{n} Y_{i}\) is sufficient for \(\lambda\). Equation Transcription: Text Transcription: Y1, Y2,...,Yn \lambda \sum_i=1^n Y_i \lambda
Read more -
Chapter 9: Problem 37 Mathematical Statistics with Applications 7
Let denote independent and identically distributed Bernoulli random variables such that \(P\left(X_{i}=1\right)=p \text { and } P\left(X_{i}=0\right)=1-p\) for each \(i=1,2, \ldots, n\) Show that \(\sum_{i=1}^{n} X_{i}\) is sufficient for by using the factorization criterion given in Theorem . Equation Transcription: Text Transcription: X1, X2,...,Xn P (X_i=1)=p and P (X_i=0)=1-p i=1,2,...,n \sum_i=1^n X_i
Read more -
Chapter 9: Problem 36 Mathematical Statistics with Applications 7
Suppose that has a binomial distribution based on trials and success probability . Then \(\hat{p}_{n}=Y / n\) is an unbiased estimator of . Use Theorem to prove that the distribution of \(\left(\hat{p}_{n}-p\right) / \sqrt{\hat{p}_{n} \hat{q}_{n} / n}\) converges to a standard normal distribution. [Hint: Write Y as we did in Section 7.5.] Equation Transcription: Text Transcription: \hat p_n=Y / n (\hatp_n-p) / \sqrt\hat p_n \hat n q_n / n
Read more -
Chapter 9: Problem 38 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from a normal distribution with mean \(\mu\) and variance \(\sigma^{2}\) a. If \(\mu\) is unknown and \(\sigma^{2}\) is known, show that \(\bar{Y}\) is sufficient for \(\mu\). b. If \(\mu\) is known and \(\sigma^{2}\) is unknown, show that \(\sum_{i=1}^{n}\left(Y_{i}-\mu\right)^{2}\) is sufficient for \(\sigma^{2}\). c. If \(\mu\) and \(\sigma^{2}\) are both unknown, show that \(\sum_{i=1}^{n} Y_{i}\) and \(\sum_{i=1}^{n} Y_{i}^{2}\) are jointly sufficient for \(\mu\) and \(\sigma^{2}\). [Thus, it follows that \(\bar{Y}\) and \(\sum_{i=1}^{n}\left(Y_{i}-\bar{Y}\right)^{2}\) or \(\bar{Y}\) and \(S^{2}\) are also jointly sufficient for \(\mu\) and \(\sigma^{2}\).] Equation Transcription: Text Transcription: Y_1, Y_2, …., Y_n mu sigma^2 mu bar{Y} sigma^2 mu mu sigma^2 sum_{i=1}^{n} (Y_{i} -mu)^2 sigma^2 mu sigma^2 sum_{i=1}^{n} Y_{i} sum_{i=1}^{n} Y_{i}^{2} mu sigma^2 bar{Y} sum_{i=1}^{n} (Y_{i}-\bar{Y})^2 S^2 mu sigma^2
Read more -
Chapter 9: Problem 40 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from a Rayleigh distribution with parameter \(\theta\). (Refer to Exercise 9.34.) Show that \(\sum_{i=1}^{n} Y_{i}^{2}\) is sufficient for \(\theta\). Equation Transcription: Text Transcription: Y_1, Y_2, …., Y_n theta sum_{i = 1}^{n} Y_{i}^{2} theta
Read more -
Chapter 9: Problem 35 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots\) be a sequence of random variables with \(E\left(Y_{i}\right)=\mu\) and \(V\left(Y_{i}\right)=\sigma_{i}^{2}\). Notice that the \(\sigma_{i}^{2,}\)s are not all equal. a. What is \(E\left(\bar{Y}_{n}\right)\)? b. What is \(V\left(\bar{Y}_{n}\right)\)? c. Under what condition (on the \(\sigma_{i}^{2,}\)s) can Theorem 9.1 be applied to show that \(\bar{Y}_{n}\) is a consistent estimator for \(\mu\)? Equation Transcription: Text Transcription: Y_1, Y_2, …. E(Y_i) = mu V(Y_i) = sigma_{i}^{2} sigma_{i}^{2,} E(bar{Y}_{n}) V(bar{Y}_{n}) sigma_{i}^{2,} bar{Y}_{n} mu
Read more -
Chapter 9: Problem 41 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from a Weibull distribution with known \(m\) and unknown \(\alpha\). (Refer to Exercise 6.26.) Show that \(\sum_{i=1}^{n} Y_{i}^{m}\) is sufficient for \(\alpha\).
Chapter 9: Problem 42 Mathematical Statistics with Applications 7
If \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from a geometric distribution with parameter \(p\), show that \(\bar{Y}\) is sufficient for \(p\).
Chapter 9: Problem 44 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote independent and identically distributed random variables from a Pareto distribution with parameters \(\alpha \text { and } \beta\). Then, by the result in Exercise 6.18, if \(\alpha, \beta>0\), \(f(y \mid \alpha, \beta)=\left\{\begin{array}{ll} \alpha \beta^{\alpha} y^{-(\alpha+1)}, & y \geq \beta \\ 0, & \text { elsewhere } \end{array}\right.\) If \(\beta\) is known, show that \(\prod_{i=1}^{n} Y_{i}\) is sufficient for \(\alpha\).
Chapter 9: Problem 45 Mathematical Statistics with Applications 7
Suppose that \(Y_{1}, Y_{2}, \ldots, Y_{n}\) is a random sample from a probability density function in the (one-parameter) exponential family so that \(f(y \mid \theta)=\left\{\begin{array}{ll} a(\theta) b(y) e^{-[c(\theta) d(y)]}, & a \leq y \leq b \\ 0, & \text { elsewhere } \end{array}\right.\) where \(a \text { and } b\) do not depend on \(\theta\). Show that \(\sum_{i=1}^{n} d\left(Y_{i}\right)\) is sufficient for \(\theta\).
Chapter 9: Problem 46 Mathematical Statistics with Applications 7
If \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from an exponential distribution with mean \(\theta\), show that \(f(y \mid \theta)\) is in the exponential family and that \(\bar{Y}\) is sufficient for \(\theta\).
Chapter 9: Problem 47 Mathematical Statistics with Applications 7
Refer to Exercise . If \(\theta\) is known, show that the power family of distributions is in the exponential family. What is a sufficient statistic for \(\alpha\)? Does this contradict your answer to Exercise ?
Chapter 9: Problem 49 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from the uniform distribution over the interval \((0, \theta)\). Show that \(Y_{(n)}=\max \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) is sufficient for \(\theta\).
Chapter 9: Problem 48 Mathematical Statistics with Applications 7
Refer to Exercise . If \(\beta\) is known, show that the Pareto distribution is in the exponential family. What is a sufficient statistic for \(\alpha\)? Argue that there is no contradiction between your answer to this exercise and the answer you found in Exercise .
Chapter 9: Problem 50 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from the uniform distribution over the interval \((\theta_{1}, \theta_{2})\). Show that \(Y_{(1)}=\min \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) and \(Y_{(n)}=\max \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) are jointly sufficient for \(\theta_{1}\) and \(\theta_{2}\).
Chapter 9: Problem 52 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be a random sample from a population with density function \(f(y \mid \theta)=\left\{\begin{array}{ll}\frac{3 y^{2}}{\theta^{3}}, & 0 \leq y \leq \theta \\0, & \text { elsewhere }\end{array}\right.\) Show that \(Y_{(n)}=\max \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) is sufficient for \(\theta\).
Chapter 9: Problem 51 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from the probability density function \(f(y \mid \theta)=\left\{\begin{array}{ll}e^{-(y-\theta)}, & y \geq \theta \\0, & \text { elsewhere }\end{array}\right.\) Show that \(Y_{(1)}=\min \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) is sufficient for \(\theta\).
Chapter 9: Problem 53 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be a random sample from a population with density function \(f(y \mid \theta)=\left\{\begin{array}{ll}\frac{2 \theta^{2}}{y^{3}}, & \theta<y<\infty \\0, & \text { elsewhere }\end{array}\right.\) Show that \(Y_{(1)}=\min \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) is sufficient for \(\theta\).
Chapter 9: Problem 54 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote independent and identically distributed random variables from a power family distribution with parameters \(\alpha\) and \(\theta\). Then, as in Exercise 9.43, if \(\alpha,\ \theta\ >\ 0,\) \(f(y \mid \alpha, \theta)=\left\{\begin{array}{ll}\alpha y^{\alpha-1} / \theta^{\alpha}, & 0 \leq y \leq \theta \\0, & \text { elsewhere }\end{array}\right.\) Show that \(\max \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) and \(\prod_{i=1}^{n} Y_{i}\) are jointly sufficient for \(\alpha\) and \(\theta\).
Chapter 9: Problem 55 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote independent and identically distributed random variables from a Pareto distribution with parameters \(\alpha\) and \(\beta\). Then, as in Exercise 9.44, if \(\alpha,\ \beta\ >\ 0,\) \(f(y \mid \alpha, \beta)=\left\{\begin{array}{ll}\alpha \beta^{\alpha} y^{-(\alpha+1)}, & y \geq \beta \\0, & \text { elsewhere }\end{array}\right.\) Show that \(\prod_{i=1}^{n} Y_{i}\) and \(\min \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) are jointly sufficient for \(\alpha\) and \(\beta\).
Chapter 9: Problem 57 Mathematical Statistics with Applications 7
Refer to Exercise 9.18. Is the estimator of \(\sigma^{2}\) given there an MVUE of \(\sigma^{2}\)?
Chapter 9: Problem 58 Mathematical Statistics with Applications 7
Refer to Exercise 9.40. Use \(\sum_{i=1}^{n} Y_{i}^{2}\) to find an MVUE of \(\theta\).
Chapter 9: Problem 59 Mathematical Statistics with Applications 7
The number of breakdowns \(Y\) per day for a certain machine is a Poisson random variable with mean \(\lambda\). The daily cost of repairing these breakdowns is given by \(C=3 Y^{2}\). If \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote the observed number of breakdowns for \(n\) independently selected days, find an MVUE for \(E(C)\).
Chapter 9: Problem 56 Mathematical Statistics with Applications 7
Refer to Exercise . Find an MVUE of \(\sigma^{2}\).
Chapter 9: Problem 60 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from the probability density function \(f(y \mid \theta)=\left\{\begin{array}{ll} \theta y^{\theta-1}, & 0<y<1, \theta>0 \\ 0, & \text {elsewhere } \end{array}\right.\) a Show that this density function is in the (one-parameter) exponential family and that \(\sum_{i=1}^{n}-\ln \left(Y_{i}\right)\) is sufficient for \(\theta\). (See Exercise 9.45.) b If \(W_{i}=-\ln \left(Y_{i}\right)\), show that \(W_{i}\) has an exponential distribution with mean \(1 / \theta\). c Use methods similar to those in Example 9.10 to show that \(2 \theta \sum_{i=1}^{n} W_{i}\) has a \(\chi^{2}\) distribution with \(2n\) df. d Show that \(E\left(\frac{1}{2 \theta \sum_{i=1}^{n} W_{i}}\right)=\frac{1}{2(n-1)}\). [Hint: Recall Exercise 4.112.] e What is the MVUE for \(\theta\)?
Chapter 9: Problem 62 Mathematical Statistics with Applications 7
Refer to Exercise . Find a function of \(Y_{(1)}\) that is an MVUE for \(\theta\).
Chapter 9: Problem 63 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be a random sample from a population with density function \(f(y \mid \theta)=\left\{\begin{array}{ll}\frac{3 y^{2}}{\theta^{3}}, & 0 \leq y \leq \theta \\0, & \text { elsewhere }\end{array}\right.\) In Exercise 9.52 you showed that \(Y_{(n)}=\max \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) is sufficient for \(\theta\). a Show that \(Y_{(n)}\) has probability density function \(f_{(n)}(y \mid \theta)=\left\{\begin{array}{cc}\frac{3 n y^{3 n-1}}{\theta^{3 n}}, & 0 \leq y \leq \theta, \\0, & \text { elsewhere }\end{array}\right.\) b Find the MVUE of \(\theta\).
Chapter 9: Problem 61 Mathematical Statistics with Applications 7
Refer to Exercise 9.49. Use \(Y_{(n)}\) to find an MVUE of \(\theta\). (See Example 9.1.)
Chapter 9: Problem 64 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be a random sample from a normal distribution with mean \(\mu\) and variance 1. a Show that the MVUE of \(\mu^{2}\) is \(\widehat{\mu^{2}}=\bar{Y}^{2}-1 / n\). b Derive the variance of \(\widehat{\mu^{2}}\).
Chapter 9: Problem 65 Mathematical Statistics with Applications 7
In this exercise, we illustrate the direct use of the Rao-Blackwell theorem. Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be independent Bernoulli random variables with \(p\left(y_{i} \mid p\right)=p^{y_{i}}(1-p)^{1-y_{i}}, \quad y_{i}=0,1\). That is, \(P\left(Y_{i}=1\right)=p\) and \(P\left(Y_{i}=0\right)=1-p\). Find the MVUE of \(p(1-p)\), which is a term in the variance of \(Y_{i}\) or \(W=\sum_{i=1}^{n} Y_{i}\), by the following steps. a Let \(T=\left\{\begin{array}{ll}1, & \text { if } Y_{1}=1 \text { and } Y_{2}=0 \\0, & \text { otherwise }\end{array}\right.\) Show that \(E(T)=p(1-p)\). b Show that \(P(T=1 \mid W=w)=\frac{w(n-w)}{n(n-1)}\). c Show that \(E(T \mid W)=\frac{n}{n-1}\left[\frac{W}{n}\left(1-\frac{W}{n}\right)\right]=\frac{n}{n-1} \bar{Y}(1-\bar{Y})\) and hence that \(n \bar{Y}(1-\bar{Y}) /(n-1)\) is the MVUE of \(p(1-p)\).
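A quick sanity check on part (c), as a sketch rather than a proof: since \(W=n\bar{Y}\sim\text{Binomial}(n,p)\), the expectation of \(n\bar{Y}(1-\bar{Y})/(n-1)\) can be computed exactly by enumerating \(w=0,\ldots,n\) and compared with \(p(1-p)\). The function name is illustrative.

```python
from math import comb

def mvue_expectation(n, p):
    """Exact E[ n*Ybar*(1 - Ybar)/(n - 1) ] when W = n*Ybar ~ Binomial(n, p)."""
    total = 0.0
    for w in range(n + 1):
        prob = comb(n, w) * p ** w * (1 - p) ** (n - w)
        ybar = w / n
        total += prob * n * ybar * (1 - ybar) / (n - 1)
    return total

# The enumeration reproduces p*(1 - p), consistent with the estimator being unbiased.
```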
Chapter 9: Problem 68 Mathematical Statistics with Applications 7
Suppose that a statistic \(U\) has a probability density function that is positive over the interval \(a \leq u \leq b\) and suppose that the density depends on a parameter \(\theta\) that can range over the interval \(\theta_{1} \leq \theta \leq \theta_{2}\). Suppose also that \(g(u)\) is continuous for \(u\) in the interval \([a, b]\). If \(E[g(U) \mid \theta]=0\) for all \(\theta\) in the interval \([\theta_{1}, \theta_{2}]\) implies that \(g(u)\) is identically zero, then the family of density functions \(\{f_{U}(u \mid \theta), \theta_{1} \leq \theta \leq \theta_{2}\}\) is said to be complete. (All statistics that we employed in Section 9.5 have complete families of density functions.) Suppose that \(U\) is a sufficient statistic for \(\theta\), and \(g_{1}(U)\) and \(g_{2}(U)\) are both unbiased estimators of \(\theta\). Show that, if the family of density functions for \(U\) is complete, \(g_{1}(U)\) must equal \(g_{2}(U)\), and thus there is a unique function of \(U\) that is an unbiased estimator of \(\theta\). Coupled with the Rao–Blackwell theorem, the property of completeness of \(f_{U}(u \mid \theta)\), along with the sufficiency of \(U\), assures us that there is a unique minimum-variance unbiased estimator (UMVUE) of \(\theta\).
Chapter 9: Problem 67 Mathematical Statistics with Applications 7
Refer to Exercise 9.66. Suppose that a sample of size \(n\) is taken from a normal population with mean \(\mu\) and variance \(\sigma^{2}\). Show that \(\sum_{i=1}^{n} Y_{i}\) and \(\sum_{i=1}^{n} Y_{i}^{2}\) jointly form minimal sufficient statistics for \(\mu\) and \(\sigma^{2}\).
Chapter 9: Problem 69 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from the probability density function \(f(y \mid \theta)=\left\{\begin{array}{cc} (\theta+1) y^{\theta}, & 0<y<1 ; \theta>-1 \\ 0, & \text {elsewhere } \end{array}\right.\) Find an estimator for \(\theta\) by the method of moments. Show that the estimator is consistent. Is the estimator a function of the sufficient statistic \(-\sum_{i=1}^{n} \ln \left(Y_{i}\right)\) that we can obtain from the factorization criterion? What implications does this have?
Chapter 9: Problem 70 Mathematical Statistics with Applications 7
Suppose that \(Y_{1}, Y_{2}, \ldots, Y_{n}\) constitute a random sample from a Poisson distribution with mean \(\lambda\). Find the method-of-moments estimator of \(\lambda\).
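Since the first moment of a Poisson(\(\lambda\)) variable is \(\lambda\), equating it to the first sample moment gives \(\hat{\lambda}=\bar{Y}\). A minimal sketch; the sampler is an illustrative helper (Knuth's product-of-uniforms method), not part of the exercise:

```python
import math
import random

def mom_poisson(sample):
    """Method-of-moments estimate: set lambda equal to the first sample moment."""
    return sum(sample) / len(sample)

def poisson_draw(lam, rng):
    """Illustrative Poisson sampler: count uniforms whose running product stays
    above exp(-lam); the count is Poisson(lam) distributed."""
    threshold = math.exp(-lam)
    k = 0
    prod = rng.random()
    while prod > threshold:
        k += 1
        prod *= rng.random()
    return k
```

With a seeded generator, the estimate from a large simulated sample lands close to the true mean, as the law of large numbers suggests.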
Chapter 9: Problem 66 Mathematical Statistics with Applications 7
The likelihood function \(L\left(y_{1}, y_{2}, \ldots, y_{n} \mid \theta\right)\) takes on different values depending on the arguments \(\left(y_{1}, y_{2}, \ldots, y_{n}\right)\). A method for deriving a minimal sufficient statistic developed by Lehmann and Scheffé uses the ratio of the likelihoods evaluated at two points, \(\left(x_{1}, x_{2}, \ldots, x_{n}\right)\) and \(\left(y_{1}, y_{2}, \ldots, y_{n}\right)\): \(\frac{L\left(x_{1}, x_{2}, \ldots, x_{n} \mid \theta\right)}{L\left(y_{1}, y_{2}, \ldots, y_{n} \mid \theta\right)}\) Many times it is possible to find a function \(g\left(x_{1}, x_{2}, \ldots, x_{n}\right)\) such that this ratio is free of the unknown parameter \(\theta\) if and only if \(g\left(x_{1}, x_{2}, \ldots, x_{n}\right)=g\left(y_{1}, y_{2}, \ldots, y_{n}\right)\). If such a function \(g\) can be found, then \(g\left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) is a minimal sufficient statistic for \(\theta\). a Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be a random sample from a Bernoulli distribution (see Example 9.6 and Exercise 9.65) with \(p\) unknown. i Show that \(\frac{L\left(x_{1}, x_{2}, \ldots, x_{n} \mid p\right)}{L\left(y_{1}, y_{2}, \ldots, y_{n} \mid p\right)}=\left(\frac{p}{1-p}\right)^{\sum x_{i}-\sum y_{i}}\) ii Argue that for this ratio to be independent of \(p\), we must have \(\sum_{i=1}^{n} x_{i}-\sum_{i=1}^{n} y_{i}=0 \text { or } \sum_{i=1}^{n} x_{i}=\sum_{i=1}^{n} y_{i}\) iii Using the method of Lehmann and Scheffé, what is a minimal sufficient statistic for \(p\)? How does this sufficient statistic compare to the sufficient statistic derived in Example 9.6 by using the factorization criterion?
b Consider the Weibull density discussed in Example 9.7. i Show that \(\frac{L\left(x_{1}, x_{2}, \ldots, x_{n} \mid \theta\right)}{L\left(y_{1}, y_{2}, \ldots, y_{n} \mid \theta\right)}=\left(\frac{x_{1} x_{2} \cdots x_{n}}{y_{1} y_{2} \cdots y_{n}}\right) \exp \left[-\frac{1}{\theta}\left(\sum_{i=1}^{n} x_{i}^{2}-\sum_{i=1}^{n} y_{i}^{2}\right)\right]\) ii Argue that \(\sum_{i=1}^{n} Y_{i}^{2}\) is a minimal sufficient statistic for \(\theta\).
Chapter 9: Problem 72 Mathematical Statistics with Applications 7
If \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from the normal distribution with mean \(\mu\) and variance \(\sigma^{2}\), find the method-of-moments estimators of \(\mu\) and \(\sigma^{2}\).
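Equating the first two sample moments to \(E(Y)=\mu\) and \(E(Y^{2})=\sigma^{2}+\mu^{2}\) gives \(\hat{\mu}=\bar{Y}\) and \(\hat{\sigma}^{2}=m_{2}'-\bar{Y}^{2}=(1/n)\sum_{i=1}^{n}(Y_{i}-\bar{Y})^{2}\). A minimal sketch with an illustrative function name:

```python
def mom_normal(sample):
    """Method-of-moments for N(mu, sigma^2):
    mu_hat = first sample moment, sigma2_hat = second moment - mu_hat**2."""
    n = len(sample)
    m1 = sum(sample) / n
    m2 = sum(y * y for y in sample) / n
    return m1, m2 - m1 * m1
```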
Chapter 9: Problem 73 Mathematical Statistics with Applications 7
An urn contains \(\theta\) black balls and \(N-\theta\) white balls. A sample of \(n\) balls is to be selected without replacement. Let \(Y\) denote the number of black balls in the sample. Show that \((N / n) Y\) is the method-of-moments estimator of \(\theta\).
Chapter 9: Problem 74 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) constitute a random sample from the probability density function given by \(f(y \mid \theta)=\left\{\begin{array}{ll}\left(\frac{2}{\theta^{2}}\right)(\theta-y), & 0 \leq y \leq \theta \\0, & \text { elsewhere }\end{array}\right.\) a Find an estimator for \(\theta\) by using the method of moments. b Is this estimator a sufficient statistic for \(\theta\)?
Chapter 9: Problem 76 Mathematical Statistics with Applications 7
Let \(X_{1}, X_{2}, X_{3}, \ldots\) be independent Bernoulli random variables such that \(P(X_{i}=1)=p\) and \(P(X_{i}=0)=1-p\) for each \(i=1,2,3, \ldots\). Let the random variable \(Y\) denote the number of trials necessary to obtain the first success—that is, the value of \(i\) for which \(X_{i}=1\) first occurs. Then \(Y\) has a geometric distribution with \(P(Y=y)=(1-p)^{y-1} p\), for \(y=1,2,3, \ldots\). Find the method-of-moments estimator of \(p\) based on this single observation \(Y\).
Chapter 9: Problem 71 Mathematical Statistics with Applications 7
If \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from the normal distribution with known mean \(\mu=0\) and unknown variance \(\sigma^{2}\), find the method-of-moments estimator of \(\sigma^{2}\).
Chapter 9: Problem 75 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be a random sample from the probability density function given by \(f(y \mid \theta)=\left\{\begin{array}{ll}\frac{\Gamma(2\theta)}{[\Gamma(\theta)]^{2}} y^{\theta-1}(1-y)^{\theta-1}, & 0\leq y \leq 1 \\0, & \text { elsewhere }\end{array}\right.\) Find the method-of-moments estimator for \(\theta\).
Chapter 9: Problem 77 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote independent and identically distributed uniform random variables on the interval \((0,3 \theta)\). Derive the method-of-moments estimator for \(\theta\).
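Since \(E(Y)=3\theta/2\) under Uniform\((0,3\theta)\), matching the first moment gives \(\hat{\theta}=2\bar{Y}/3\). A minimal simulation sketch; the true value \(\theta=2\) and the function name are illustrative choices:

```python
import random

def mom_uniform_3theta(sample):
    """E(Y) = 3*theta/2 under Uniform(0, 3*theta), so theta_hat = 2*Ybar/3."""
    return 2.0 * sum(sample) / (3.0 * len(sample))

# Illustrative check with true theta = 2, i.e. Uniform(0, 6) data.
rng = random.Random(1)
estimate = mom_uniform_3theta([rng.uniform(0.0, 6.0) for _ in range(20000)])
```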
Chapter 9: Problem 78 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote independent and identically distributed random variables from a power family distribution with parameters \(\alpha \text { and } \theta=3\). Then, as in Exercise , if \(\alpha>0\), \(f(y \mid \alpha)=\left\{\begin{array}{ll} \alpha y^{\alpha-1} / 3^{\alpha}, & 0 \leq y \leq 3 \\ 0, & \text { elsewhere } \end{array}\right.\) Show that \(E\left(Y_{1}\right)=3 \alpha /(\alpha+1)\) and derive the method-of-moments estimator for \(\alpha\).
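Setting \(\bar{Y}=3\alpha/(\alpha+1)\) and solving gives \(\hat{\alpha}=\bar{Y}/(3-\bar{Y})\). A simulation sketch using inverse-CDF sampling from \(F(y)=(y/3)^{\alpha}\); both helper names are illustrative:

```python
import random

def mom_power_alpha(sample):
    """Solve Ybar = 3*alpha/(alpha + 1) for alpha: alpha_hat = Ybar/(3 - Ybar)."""
    ybar = sum(sample) / len(sample)
    return ybar / (3.0 - ybar)

def power_draw(alpha, rng):
    """Inverse-CDF draw: F(y) = (y/3)**alpha on [0, 3], so Y = 3*U**(1/alpha)."""
    return 3.0 * rng.random() ** (1.0 / alpha)
```

With \(\alpha=2\), for instance, \(E(Y)=2\), and a large seeded sample yields an estimate near 2.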
Chapter 9: Problem 79 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote independent and identically distributed random variables from a Pareto distribution with parameters \(\alpha\) and \(\beta\), where \(\beta\) is known. Then, if \(\alpha>0\), \(f(y \mid \alpha, \beta)=\left\{\begin{array}{ll}\alpha \beta^{\alpha} y^{-(\alpha+1)}, & y \geq \beta \\0, & \text { elsewhere }\end{array}\right.\) Show that \(E\left(Y_{i}\right)=\alpha \beta /(\alpha-1)\) if \(\alpha>1\) and that \(E\left(Y_{i}\right)\) is undefined if \(0<\alpha \leq 1\). Thus, the method-of-moments estimator for \(\alpha\) is undefined.
Chapter 9: Problem 80 Mathematical Statistics with Applications 7
Suppose that \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from the Poisson distribution with mean \(\lambda\). a Find the MLE \(\hat{\lambda}\) for \(\lambda\). b Find the expected value and variance of \(\hat{\lambda}\). c Show that the estimator of part (a) is consistent for \(\lambda\). d What is the MLE for \(P(Y=0)=e^{-\lambda}\)?
Chapter 9: Problem 84 Mathematical Statistics with Applications 7
A certain type of electronic component has a lifetime \(Y\) (in hours) with probability density function given by \(f(y \mid \theta)=\left\{\begin{array}{ll} \left(\frac{1}{\theta^{2}}\right) y e^{-y / \theta}, & y>0 \\ 0, & \text { otherwise } \end{array}\right.\) That is, \(Y\) has a gamma distribution with parameters \(\alpha=2\) and \(\theta\). Let \(\hat{\theta}\) denote the MLE of \(\theta\). Suppose that three such components, tested independently, had lifetimes of 120, 130, and 128 hours. a Find the MLE of \(\theta\). b Find \(E(\hat{\theta})\) and \(V(\hat{\theta})\). c Suppose that \(\theta\) actually equals 130. Give an approximate bound that you might expect for the error of estimation. d What is the MLE for the variance of \(Y\)?
Chapter 9: Problem 85 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from the density function given by \(f(y \mid \alpha, \theta)=\left\{\begin{array}{ll}\left(\frac{1}{\Gamma(\alpha) \theta^{\alpha}}\right) y^{\alpha-1} e^{-y / \theta}, & y>0 \\0, & \text { elsewhere }\end{array}\right.\) where \(\alpha>0\) is known. a Find the MLE \(\hat{\theta}\) of \(\theta\). b Find the expected value and variance of \(\hat{\theta}\). c Show that \(\hat{\theta}\) is consistent for \(\theta\). d What is the best (minimal) sufficient statistic for \(\theta\) in this problem? e Suppose that \(n=5\) and \(\alpha=2\). Use the minimal sufficient statistic to construct a 90% confidence interval for \(\theta\). [Hint: Transform to a \(\chi^{2}\) distribution.]
Chapter 9: Problem 86 Mathematical Statistics with Applications 7
Suppose that \(X_{1}, X_{2}, \ldots, X_{m}\), representing yields per acre for corn variety A, constitute a random sample from a normal distribution with mean \(\mu_{1}\) and variance \(\sigma^{2}\). Also, \(Y_{1}, Y_{2}, \ldots, Y_{n}\), representing yields for corn variety B, constitute a random sample from a normal distribution with mean \(\mu_{2}\) and variance \(\sigma^{2}\). If the \(X\)'s and \(Y\)'s are independent, find the MLE for the common variance \(\sigma^{2}\). Assume that \(\mu_{1}\) and \(\mu_{2}\) are unknown.
Chapter 9: Problem 81 Mathematical Statistics with Applications 7
Suppose that \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from an exponentially distributed population with mean \(\theta\). Find the MLE of the population variance \(\theta^{2}\). [Hint: Recall Example 9.9.]
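The MLE of \(\theta\) here is \(\bar{Y}\), and by the invariance property of MLEs the MLE of \(\theta^{2}\) is \(\bar{Y}^{2}\). A minimal sketch with an illustrative function name:

```python
def mle_exp_variance(sample):
    """Exponential(theta): MLE of theta is Ybar; by invariance, the MLE of the
    population variance theta**2 is Ybar**2."""
    ybar = sum(sample) / len(sample)
    return ybar * ybar
```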
Chapter 9: Problem 83 Mathematical Statistics with Applications 7
Suppose that \(Y_{1}, Y_{2}, \ldots, Y_{n}\) constitute a random sample from a uniform distribution with probability density function \(f(y \mid \theta)=\left\{\begin{array}{ll}\frac{1}{2 \theta+1}, & 0 \leq y \leq 2 \theta+1 \\0, & \text { otherwise }\end{array}\right.\) a Obtain the MLE of \(\theta\). b Obtain the MLE for the variance of the underlying distribution.
Chapter 9: Problem 82 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from the density function given by \(f(y \mid \theta)=\left\{\begin{array}{ll} \left(\frac{1}{\theta}\right) r y^{r-1} e^{-y^{r} / \theta}, & \theta>0, y>0 \\ 0, & \text { elsewhere } \end{array}\right.\) where \(r\) is a known positive constant. a Find a sufficient statistic for \(\theta\). b Find the MLE of \(\theta\). c Is the estimator in part (b) an MVUE for \(\theta\)?
Chapter 9: Problem 87 Mathematical Statistics with Applications 7
A random sample of 100 voters selected from a large population revealed 30 favoring candidate A, 38 favoring candidate B, and 32 favoring candidate C. Find MLEs for the proportions of voters in the population favoring candidates A, B, and C, respectively. Estimate the difference between the fractions favoring A and B and place a 2-standard-deviation bound on the error of estimation.
Chapter 9: Problem 88 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from the probability density function \(f(y \mid \theta)=\left\{\begin{array}{ll} (\theta+1) y^{\theta}, & 0<y<1, \theta>-1 \\ 0, & \text { elsewhere } \end{array}\right.\) Find the MLE for \(\theta\). Compare your answer to the method-of-moments estimator found in Exercise .
Chapter 9: Problem 89 Mathematical Statistics with Applications 7
It is known that the probability p of tossing heads on an unbalanced coin is either 1/4 or 3/4. The coin is tossed twice and a value for Y, the number of heads, is observed. For each possible value of Y, which of the two values for p (1/4 or 3/4) maximizes the probability that Y = y? Depending on the value of y actually observed, what is the MLE of p?
Chapter 9: Problem 91 Mathematical Statistics with Applications 7
Find the MLE of \(\theta\) based on a random sample of size \(n\) from a uniform distribution on the interval \((0,2 \theta)\).
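The likelihood is \((1/(2\theta))^{n}\) for \(\theta \geq y_{(n)}/2\) and decreasing in \(\theta\), so it is maximized at \(\hat{\theta}=Y_{(n)}/2\). A minimal sketch with an illustrative function name:

```python
def mle_uniform_half(sample):
    """Uniform(0, 2*theta): likelihood (1/(2*theta))**n requires 2*theta >= max(y)
    and decreases in theta, so the MLE is max(y)/2."""
    return max(sample) / 2.0
```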
Chapter 9: Problem 90 Mathematical Statistics with Applications 7
A random sample of 100 men produced a total of 25 who favored a controversial local issue. An independent random sample of 100 women produced a total of 30 who favored the issue. Assume that \(p_{M}\) is the true underlying proportion of men who favor the issue and that \(p_{W}\) is the true underlying proportion of women who favor the issue. If it actually is true that \(p_{W}=p_{M}=p\), find the MLE of the common proportion \(p\).
Chapter 9: Problem 93 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be a random sample from a population with density function \(f(y \mid \theta)=\left\{\begin{array}{ll}\frac{2 \theta^{2}}{y^{3}}, & \theta<y<\infty \\0, & \text { elsewhere }\end{array}\right.\) In Exercise 9.53, you showed that \(Y_{(1)}=\min \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) is sufficient for \(\theta\). a Find the MLE for \(\theta\). [Hint: See Example 9.16.] b Find a function of the MLE in part (a) that is a pivotal quantity. c Use the pivotal quantity from part (b) to find a \(100(1-\alpha)\%\) confidence interval for \(\theta\).
Chapter 9: Problem 94 Mathematical Statistics with Applications 7
Suppose that \(\widehat{\theta}\) is the MLE for a parameter \(\theta\). Let \(t(\theta)\) be a function of \(\theta\) that possesses a unique inverse [that is, if \(\beta=t(\theta)\), then \(\left.\theta=t^{-1}(\beta)\right]\). Show that \(t(\widehat{\theta})\) is the MLE of \(t(\theta)\).
Chapter 9: Problem 95 Mathematical Statistics with Applications 7
A random sample of n items is selected from the large number of items produced by a certain production line in one day. Find the MLE of the ratio R, the proportion of defective items divided by the proportion of good items.
Chapter 9: Problem 92 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be a random sample from a population with density function \(f(y \mid \theta)=\left\{\begin{array}{ll}\frac{3 y^{2}}{\theta^{3}}, & 0 \leq y \leq \theta \\0, & \text { elsewhere. }\end{array}\right.\) In Exercise 9.52, you showed that \(Y_{(n)}=\max \left(Y_{1}, Y_{2}, \ldots, Y_{n}\right)\) is sufficient for \(\theta\). a Find the MLE for \(\theta\). [Hint: See Example 9.16.] b Find a function of the MLE in part (a) that is a pivotal quantity. [Hint: see Exercise 9.63.] c Use the pivotal quantity from part (b) to find a \(100(1-\alpha)\%\) confidence interval for \(\theta\).
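For this density \(P(Y_{(n)} \leq y)=(y/\theta)^{3n}\), so the MLE is \(\hat{\theta}=Y_{(n)}\) and \(U=(Y_{(n)}/\theta)^{3n}\sim\text{Uniform}(0,1)\) is pivotal. The one-sided interval below is an illustrative sketch of part (c), not the only valid choice, and the function name is an assumption:

```python
def ci_theta(sample, conf=0.90):
    """CI from the pivot U = (Y_(n)/theta)**(3n) ~ Uniform(0, 1):
    P(alpha**(1/(3n)) <= Y_(n)/theta <= 1) = 1 - alpha
    rearranges to theta in [y_(n), y_(n)/alpha**(1/(3n))]."""
    n = len(sample)
    y_max = max(sample)
    alpha = 1.0 - conf
    return y_max, y_max / alpha ** (1.0 / (3 * n))
```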
Chapter 9: Problem 96 Mathematical Statistics with Applications 7
Consider a random sample of size \(n\) from a normal population with mean \(\mu\) and variance \(\sigma^{2}\), both unknown. Derive the MLE of \(\sigma\).
Chapter 9: Problem 97 Mathematical Statistics with Applications 7
The geometric probability mass function is given by \(p(y \mid p)=p(1-p)^{y-1}, \quad y=1,2,3, \ldots\) A random sample of size \(n\) is taken from a population with a geometric distribution. a Find the method-of-moments estimator for \(p\). b Find the MLE for \(p\).
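For this problem the standard answer (taken here as an assumption to check, not derived) is that the method-of-moments estimator from \(E[Y]=1/p\) and the MLE coincide: \(\hat{p}=1/\bar{Y}\). A minimal simulation sketch:

```python
import math
import random

def sample_geometric(p):
    # Inverse-CDF draw for P(Y = y) = p * (1 - p)**(y - 1), y = 1, 2, 3, ...
    # floor(log(1-u)/log(1-p)) + 1 lands on the support {1, 2, 3, ...}.
    return math.floor(math.log(1.0 - random.random()) / math.log(1.0 - p)) + 1

random.seed(3)
p_true, n = 0.3, 50000
ybar = sum(sample_geometric(p_true) for _ in range(n)) / n
# Both the method-of-moments estimator (solve ybar = 1/p) and the MLE
# reduce to 1/ybar here.
p_hat = 1.0 / ybar
print(p_hat)  # should be near p_true = 0.3
```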
Chapter 9: Problem 98 Mathematical Statistics with Applications 7
Refer to Exercise . What is the approximate variance of the MLE?
Chapter 9: Problem 100 Mathematical Statistics with Applications 7
Problem 100E Suppose that \(Y_{1}, Y_{2}, \ldots, Y_{n}\) constitute a random sample of size \(n\) from an exponential distribution with mean \(\theta\). Find a \(100(1-\alpha)\%\) confidence interval for \(t(\theta)=\theta^{2}\).
Chapter 9: Problem 99 Mathematical Statistics with Applications 7
Consider the distribution discussed in Example . Use the method presented in Section to derive a \(100(1-\alpha) \%\) confidence interval for \(t(p)=p\). Is the resulting interval familiar to you?
Chapter 9: Problem 103 Mathematical Statistics with Applications 7
A random sample is taken from a population with a Rayleigh distribution. As in Exercise , the Rayleigh density function is \(f(y)=\left\{\begin{array}{ll}\left(\frac{2 y}{\theta}\right) e^{-y^{2} / \theta}, & y>0 \\ 0, & \text { elsewhere }\end{array}\right.\) a Find the MLE of \(\theta\). b Find the approximate variance of the MLE obtained in part (a).
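A simulation sketch under the standard results, stated here as assumptions: for this Rayleigh density, \(W=Y^{2}\) is exponential with mean \(\theta\), the MLE is \(\hat{\theta}=\frac{1}{n}\sum Y_{i}^{2}\), and the Fisher information gives approximate variance \(\theta^{2}/n\).

```python
import math
import random

random.seed(4)
theta, n, reps = 2.0, 25, 4000
mles = []
for _ in range(reps):
    # If W = Y^2 is exponential with mean theta, then Y = sqrt(W).
    ys = [math.sqrt(random.expovariate(1.0 / theta)) for _ in range(n)]
    mles.append(sum(y * y for y in ys) / n)  # MLE = sum(y_i^2) / n
mean_mle = sum(mles) / reps
var_mle = sum((m - mean_mle) ** 2 for m in mles) / reps
print(mean_mle, var_mle)  # near theta = 2.0 and theta^2/n = 0.16
```

Since \(\hat{\theta}\) is the mean of \(n\) exponential variables with mean \(\theta\), its variance is exactly \(\theta^{2}/n\) here, so the "approximate" variance from the information is exact in this model.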
Chapter 9: Problem 104 Mathematical Statistics with Applications 7
Suppose that \(Y_{1}, Y_{2}, \ldots, Y_{n}\) constitute a random sample from the density function \(f(y \mid \theta)=\left\{\begin{array}{ll}e^{-(y-\theta)}, & y>\theta \\ 0, & \text { elsewhere }\end{array}\right.\) where \(\theta\) is an unknown, positive constant. a Find an estimator \(\widehat{\theta}_{1}\) for \(\theta\) by the method of moments. b Find an estimator \(\widehat{\theta}_{2}\) for \(\theta\) by the method of maximum likelihood. c Adjust \(\widehat{\theta}_{1}\) and \(\widehat{\theta}_{2}\) so that they are unbiased. Find the efficiency of the adjusted \(\widehat{\theta}_{1}\) relative to the adjusted \(\widehat{\theta}_{2}\).
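A simulation sketch using the standard answers as assumptions (not the book's derivation): \(E[Y]=\theta+1\) gives \(\widehat{\theta}_{1}=\bar{Y}-1\), the MLE is \(\widehat{\theta}_{2}=Y_{(1)}\), the unbiased adjustments are \(\bar{Y}-1\) and \(Y_{(1)}-1/n\), and their variances are \(1/n\) and \(1/n^{2}\), so the relative efficiency is \(1/n\).

```python
import random

random.seed(5)
theta, n, reps = 1.5, 10, 20000
adj1, adj2 = [], []
for _ in range(reps):
    # Shifted exponential: Y = theta + E with E a standard exponential.
    ys = [theta + random.expovariate(1.0) for _ in range(n)]
    adj1.append(sum(ys) / n - 1.0)   # adjusted method-of-moments estimator
    adj2.append(min(ys) - 1.0 / n)   # adjusted MLE

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

eff = var(adj2) / var(adj1)
print(eff)  # should be near 1/n = 0.1
```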
Chapter 9: Problem 101 Mathematical Statistics with Applications 7
Problem 101E Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample of size \(n\) from a Poisson distribution with mean \(\lambda\). Find a \(100(1-\alpha)\%\) confidence interval for \(t(\lambda)=e^{-\lambda}=P(Y=0)\).
Chapter 9: Problem 102 Mathematical Statistics with Applications 7
Refer to Exercises and . If a sample of size 30 yields \(\bar{y}=4.4\), find a \(95\%\) confidence interval for .
Chapter 9: Problem 105 Mathematical Statistics with Applications 7
Refer to Exercise . Under the conditions outlined there, find the MLE of \(\sigma^{2}\).
Chapter 9: Problem 107 Mathematical Statistics with Applications 7
Suppose that a random sample of length-of-life measurements, \(Y_{1}, Y_{2}, \ldots, Y_{n}\), is to be taken of components whose length of life has an exponential distribution with mean \(\theta\). It is frequently of interest to estimate \(\bar{F}(t)=1-F(t)=e^{-t / \theta}\), the reliability at time \(t\) of such a component. For any fixed value of \(t\), find the MLE of \(\bar{F}(t)\).
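A sketch of the invariance route, stated as an assumption consistent with the earlier invariance exercise: the MLE of \(\theta\) for an exponential sample is \(\bar{Y}\), so the MLE of the reliability \(\bar{F}(t)=e^{-t/\theta}\) is \(e^{-t/\bar{Y}}\).

```python
import math
import random

random.seed(6)
theta, n, t = 2.0, 2000, 1.0
# Exponential sample with mean theta (expovariate takes the rate 1/theta).
ys = [random.expovariate(1.0 / theta) for _ in range(n)]
ybar = sum(ys) / n  # MLE of theta
# By invariance of MLEs, plug ybar into the reliability function.
rel_hat = math.exp(-t / ybar)
print(rel_hat)  # near the true reliability exp(-t/theta) = exp(-0.5)
```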
Chapter 9: Problem 106 Mathematical Statistics with Applications 7
Problem 106SE Suppose that \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote a random sample from a Poisson distribution with mean \(\lambda\). Find the MVUE of \(P\left(Y_{i}=0\right)=e^{-\lambda}\). [Hint: Make use of the Rao–Blackwell theorem.]
Chapter 9: Problem 109 Mathematical Statistics with Applications 7
Suppose that \(n\) integers are drawn at random and with replacement from the integers \(1,2, \ldots, N\). That is, each sampled integer has probability \(1 / N\) of taking on any of the values \(1,2, \ldots, N\), and the sampled values are independent. a Find the method-of-moments estimator \(\widehat{N}_{1}\) of \(N\). b Find \(E\left(\widehat{N}_{1}\right)\) and \(V\left(\widehat{N}_{1}\right)\).
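A simulation sketch of the method-of-moments answer, stated here as an assumption to check: \(E[Y]=(N+1)/2\) gives \(\widehat{N}_{1}=2\bar{Y}-1\), which is unbiased with \(V(\widehat{N}_{1})=(N^{2}-1)/(3n)\) since \(V(Y)=(N^{2}-1)/12\).

```python
import random

random.seed(7)
N, n, reps = 100, 5, 20000
ests = []
for _ in range(reps):
    # Draws with replacement from the discrete uniform on {1, ..., N}.
    ys = [random.randint(1, N) for _ in range(n)]
    ests.append(2.0 * sum(ys) / n - 1.0)  # N1_hat = 2*ybar - 1
mean_est = sum(ests) / reps
var_est = sum((e - mean_est) ** 2 for e in ests) / reps
print(mean_est, var_est)  # near N = 100 and (N^2 - 1)/(3n), about 666.6
```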
Chapter 9: Problem 110 Mathematical Statistics with Applications 7
Refer to Exercise 9.109. a Find the MLE \(\widehat{N}_{2}\) of \(N\). b Show that \(E\left(\widehat{N}_{2}\right)\) is approximately \([n /(n+1)] N\). Adjust \(\widehat{N}_{2}\) to form an estimator \(\widehat{N}_{3}\) that is approximately unbiased for \(N\). c Find an approximate variance for \(\widehat{N}_{3}\) by using the fact that for large \(N\) the variance of the largest sampled integer is approximately \(\frac{n N^{2}}{(n+1)^{2}(n+2)}\). d Show that for large \(N\) and \(n>1\), \(V\left(\widehat{N}_{3}\right)<V\left(\widehat{N}_{1}\right)\).
Chapter 9: Problem 108 Mathematical Statistics with Applications 7
The MLE obtained in Exercise is a function of the minimal sufficient statistic for \(\theta\), but it is not unbiased. Use the Rao–Blackwell theorem to find the MVUE of \(e^{-t / \theta}\) by the following steps. a Let \(V=\left\{\begin{array}{ll}1, & Y_{1}>t \\ 0, & \text { elsewhere }\end{array}\right.\) Show that \(V\) is an unbiased estimator of \(e^{-t / \theta}\). b Because \(U=\sum_{i=1}^{n} Y_{i}\) is the minimal sufficient statistic for \(\theta\), show that the conditional density function for \(Y_{1}\), given \(U=u\), is \(f_{Y_{1} \mid U}\left(y_{1} \mid u\right)=\left\{\begin{array}{ll}\left(\frac{n-1}{u^{n-1}}\right)\left(u-y_{1}\right)^{n-2}, & 0<y_{1}<u \\ 0, & \text { elsewhere }\end{array}\right.\) c Show that \(E(V \mid U)=P\left(Y_{1}>t \mid U\right)=\left(1-\frac{t}{U}\right)^{n-1}\). This is the MVUE of \(e^{-t / \theta}\) by the Rao–Blackwell theorem and by the fact that the density function for \(U\) is complete.
Chapter 9: Problem 111 Mathematical Statistics with Applications 7
Refer to Exercise . Suppose that enemy tanks have serial numbers \(1,2, \ldots, N\). A spy randomly observed five tanks (with replacement) with serial numbers , and 57. Estimate \(N\) and place a bound on the error of estimation.
Chapter 9: Problem 43 Mathematical Statistics with Applications 7
Let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) denote independent and identically distributed random variables from a power family distribution with parameters \(\alpha\) and \(\theta\). Then, by the result in Exercise 6.17, if \(\alpha, \theta>0\), \(f(y \mid \alpha, \theta)=\left\{\begin{array}{ll}\alpha y^{\alpha-1} / \theta^{\alpha}, & 0 \leq y \leq \theta \\ 0, & \text { elsewhere }\end{array}\right.\) If \(\theta\) is known, show that \(\prod_{i=1}^{n} Y_{i}\) is sufficient for \(\alpha\).