- 8.4.1: Suppose that X has the t distribution with m degrees of freedom (...
- 8.4.2: Suppose that X1,...,Xn form a random sample from the normal distrib...
- 8.4.3: Suppose that the five random variables X1,...,X5 are i.i.d. and tha...
- 8.4.4: By using the table of the t distribution given in the back of this ...
- 8.4.5: Suppose that the random variables X1 and X2 are independent and tha...
- 8.4.6: In Example 8.2.3, suppose that we will observe n = 20 cheese chunks...
- 8.4.7: Prove the limit formula Eq. (8.4.6). Hint: Use Theorem 5.7.4.
- 8.4.8: Let X have the standard normal distribution, and let Y have the t...
Solutions for Chapter 8.4: Sampling Distributions of Estimators
Full solutions for Probability and Statistics | 4th Edition
2^k factorial experiment
A full factorial experiment with k factors and all factors tested at only two levels (settings) each.
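As a quick sketch, the full set of runs in a 2^k design can be enumerated by taking every combination of the two coded levels, -1 (low) and +1 (high). The factor names below are illustrative, not from the text:

```python
from itertools import product

# Enumerate all runs of a 2^3 factorial design: each of the k = 3
# factors is tested at two coded levels, -1 (low) and +1 (high).
factors = ["A", "B", "C"]          # illustrative factor names
runs = list(product([-1, +1], repeat=len(factors)))

print(len(runs))                   # 2^3 = 8 runs
for run in runs:
    print(dict(zip(factors, run)))
```

With k factors the design always has exactly 2^k runs, which is why the experiment size grows quickly as factors are added.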
α-error (or α-risk)
In hypothesis testing, an error incurred by rejecting a null hypothesis when it is actually true (also called a type I error).
Alternative hypothesis
In statistical hypothesis testing, this is a hypothesis other than the one that is being tested. The alternative hypothesis contains feasible conditions, whereas the null hypothesis specifies conditions that are under test.
Bayes estimator
An estimator for a parameter obtained from a Bayesian method that uses a prior distribution for the parameter along with the conditional distribution of the data given the parameter to obtain the posterior distribution of the parameter. The estimator is obtained from the posterior distribution.
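A minimal worked sketch, assuming a Bernoulli success probability p with a conjugate Beta(a, b) prior (the prior and data values below are made up for illustration): the posterior after k successes in n trials is Beta(a + k, b + n - k), and the posterior mean serves as the Bayes estimator under squared-error loss.

```python
from fractions import Fraction

# Bayes estimator for a Bernoulli probability p under a Beta(a, b) prior.
# Posterior after k successes in n trials: Beta(a + k, b + n - k).
# Posterior mean (the estimator under squared-error loss):
#     (a + k) / (a + b + n)
a, b = 1, 1                        # uniform prior on p (illustrative)
n, k = 10, 7                       # illustrative data: 7 successes in 10 trials

posterior_mean = Fraction(a + k, a + b + n)
print(posterior_mean)              # 8/12, reduced to 2/3
```

Note how the estimator blends the prior (a, b) with the data (k, n); with a flat prior it is close to, but not equal to, the sample proportion k/n.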
Bernoulli trials
Sequences of independent trials with only two outcomes, generally called “success” and “failure,” in which the probability of success remains constant.
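The number of successes in n such trials follows a binomial distribution. A small sketch (n and p below are illustrative values):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k successes in n independent Bernoulli trials,
    each with success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3                     # illustrative values
pmf = [binom_pmf(k, n, p) for k in range(n + 1)]
print(sum(pmf))                    # probabilities over k = 0..n sum to 1
```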
Bias
An effect that systematically distorts a statistical result or estimate, preventing it from representing the true quantity of interest.
Categorical data
Data consisting of counts or observations that can be classified into categories. The categories may be descriptive.
Components of variance
The individual components of the total variance that are attributable to specific sources. This usually refers to the individual variance components arising from a random or mixed model analysis of variance.
Conditional mean
The mean of the conditional probability distribution of a random variable.
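As a sketch for the discrete case, E[Y | X = x] can be computed directly from a joint probability mass function by conditioning on X = x (the joint distribution below is made up for illustration):

```python
# Conditional mean E[Y | X = x] from a small joint pmf table.
# joint[(x, y)] = P(X = x, Y = y); illustrative values only.
joint = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.30, (1, 1): 0.40,
}

def cond_mean_y(x):
    # marginal P(X = x), then average y against P(Y = y | X = x)
    px = sum(p for (xi, _), p in joint.items() if xi == x)
    return sum(y * p / px for (xi, y), p in joint.items() if xi == x)

print(cond_mean_y(0))   # 0.20 / 0.30
print(cond_mean_y(1))   # 0.40 / 0.70
```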
Conditional probability mass function
The probability mass function of the conditional probability distribution of a discrete random variable.
When a factorial experiment is run in blocks and the blocks are too small to contain a complete replicate of the experiment, one can run a fraction of the replicate in each block, but this results in losing information on some effects. These effects are linked with or confounded with the blocks. In general, when two factors are varied such that their individual effects cannot be determined separately, their effects are said to be confounded.
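The block assignment described above can be sketched for a 2^3 design split into two blocks of four runs, with the three-factor interaction ABC chosen as the defining contrast (an illustrative choice, not from the text):

```python
from itertools import product
from math import prod

# Confounding ABC with blocks in a 2^3 factorial: runs whose ABC
# contrast (the product of the coded levels) is +1 go to one block,
# runs with contrast -1 to the other.  The ABC effect can then no
# longer be separated from the block effect.
runs = list(product([-1, +1], repeat=3))
block1 = [r for r in runs if prod(r) == +1]   # ABC contrast = +1
block2 = [r for r in runs if prod(r) == -1]   # ABC contrast = -1

print(len(block1), len(block2))   # two blocks of four runs each
```

Choosing a high-order interaction such as ABC as the defining contrast is the usual design choice, since its effect is typically the least interesting one to sacrifice.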
Continuity correction
A correction factor used to improve the approximation to binomial probabilities from a normal distribution.
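A minimal sketch of the idea: to approximate P(X ≤ k) for X ~ Binomial(n, p), evaluate the normal CDF at k + 0.5 rather than k (the values of n, p, and k below are illustrative):

```python
from math import comb, erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

n, p, k = 50, 0.4, 20              # illustrative values
mu, sigma = n * p, sqrt(n * p * (1 - p))

# Exact binomial tail probability P(X <= k)
exact = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Normal approximation with continuity correction: evaluate at k + 0.5
approx = norm_cdf((k + 0.5 - mu) / sigma)

print(exact, approx)               # the two values agree closely
```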
Continuous distribution
A probability distribution for a continuous random variable.
Decision interval
A parameter in a tabular CUSUM algorithm that is determined from a trade-off between false alarms and the detection of assignable causes.
Deming’s 14 points.
A management philosophy promoted by W. Edwards Deming that emphasizes the importance of change and quality.
Designed experiment
An experiment in which the tests are planned in advance and the plans usually incorporate statistical models. See Experiment.
Distribution function
Another name for a cumulative distribution function.
Error of estimation
The difference between an estimated value and the true value.
F-test
Any test of significance involving the F distribution. The most common F-tests are (1) testing hypotheses about the variances or standard deviations of two independent normal distributions, (2) testing hypotheses about treatment means or variance components in the analysis of variance, and (3) testing significance of regression or tests on subsets of parameters in a regression model.
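For case (1), the test statistic is simply the ratio of the two sample variances, with (n1 - 1, n2 - 1) degrees of freedom. A small sketch with made-up illustrative samples:

```python
from statistics import variance

# F statistic for comparing the variances of two independent normal
# samples: F = s1^2 / s2^2, referred to an F distribution with
# (n1 - 1, n2 - 1) degrees of freedom.  Data are illustrative.
sample1 = [4.1, 5.3, 4.8, 5.0, 4.6]
sample2 = [3.9, 4.0, 4.2, 4.1, 3.8]

s1, s2 = variance(sample1), variance(sample2)   # sample variances (n - 1)
F = s1 / s2
df = (len(sample1) - 1, len(sample2) - 1)

print(F, df)   # compare F against an F(df1, df2) critical value from a table
```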
Fisher’s least significant difference (LSD) method
A series of pair-wise hypothesis tests of treatment means in an experiment to determine which means differ.