- 7.2.1: Consider again the situation described in Example 7.2.8. This time,...
- 7.2.2: Suppose that the proportion of defective items in a large manufactu...
- 7.2.3: Suppose that the number of defects on a roll of magnetic recording ...
- 7.2.4: Suppose that the prior distribution of some parameter is a gamma di...
- 7.2.5: Suppose that the prior distribution of some parameter is a beta dis...
- 7.2.6: Suppose that the proportion of defective items in a large manufactu...
- 7.2.7: Consider again the problem described in Exercise 6, but suppose now...
- 7.2.8: Suppose that X1,...,Xn form a random sample from a distribution for...
- 7.2.9: Consider again the problem described in Exercise 6, and assume the ...
- 7.2.10: Suppose that a single observation X is to be taken from the uniform...
- 7.2.11: Consider again the conditions of Exercise 10, and assume the same p...
Solutions for Chapter 7.2: Estimation
Full solutions for Probability and Statistics | 4th Edition
Additivity property of χ²
If two independent random variables X1 and X2 are distributed as chi-square with v1 and v2 degrees of freedom, respectively, then Y = X1 + X2 is a chi-square random variable with v = v1 + v2 degrees of freedom. This generalizes to any number of independent chi-square random variables.
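As an illustrative sketch (not part of the text), the additivity property can be checked by simulation; the degrees of freedom and replication count below are arbitrary choices:

```python
# Empirical check: the sum of two independent chi-square variables has a
# chi-square distribution with the degrees of freedom added.
import random

random.seed(0)

def chi_square_sample(v):
    """One draw from chi-square with v degrees of freedom,
    built as the sum of v squared standard normals."""
    return sum(random.gauss(0, 1) ** 2 for _ in range(v))

v1, v2 = 3, 5
samples = [chi_square_sample(v1) + chi_square_sample(v2) for _ in range(20000)]
mean = sum(samples) / len(samples)
# A chi-square variable with v degrees of freedom has mean v, so the
# sample mean should be close to v1 + v2 = 8.
print(round(mean, 1))
```

Only the mean is checked here; a fuller check would compare the whole empirical distribution against the chi-square density with v1 + v2 degrees of freedom.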
Analysis of variance (ANOVA)
A method of decomposing the total variability in a set of observations, as measured by the sum of the squares of these observations from their average, into component sums of squares that are associated with specific defined sources of variation.
Analytic study
A study in which a sample from a population is used to make inference about a future population. Stability needs to be assumed. See Enumerative study
Bayes estimator
An estimator for a parameter obtained from a Bayesian method that uses a prior distribution for the parameter along with the conditional distribution of the data given the parameter to obtain the posterior distribution of the parameter. The estimator is obtained from the posterior distribution.
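As a hedged sketch of this idea in the setting of several exercises above (estimating a proportion of defectives), a conjugate beta prior combined with binomial data gives a beta posterior, and under squared-error loss the Bayes estimator is the posterior mean. The function name and parameter values are illustrative, not from the text:

```python
# Bayes estimator of a proportion p with a Beta(alpha, beta) prior and
# x defectives observed in a sample of n items.
def bayes_estimate_proportion(alpha, beta, n, x):
    """The posterior is Beta(alpha + x, beta + n - x); under squared-error
    loss the Bayes estimator is the posterior mean."""
    post_alpha = alpha + x
    post_beta = beta + n - x
    return post_alpha / (post_alpha + post_beta)

# With a uniform Beta(1, 1) prior and 2 defectives in 20 items,
# the estimate is (1 + 2) / (2 + 20) = 3/22, about 0.136.
print(round(bayes_estimate_proportion(1, 1, 20, 2), 3))
```

Note how the estimate sits between the prior mean (0.5) and the sample proportion (0.1), pulled toward the data as n grows.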
Box plot (or box and whisker plot)
A graphical display of data in which the box contains the middle 50% of the data (the interquartile range) with the median dividing it, and the whiskers extend to the smallest and largest values (or some deined lower and upper limits).
Central limit theorem
The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. It is a necessary and sufficient condition that none of the variances of the individual random variables are large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
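A minimal simulation sketch (sample sizes chosen arbitrarily) shows the effect for averages of uniform random variables, whose mean and variance are known in closed form:

```python
# Averages of n Uniform(0, 1) variables should concentrate around 0.5
# with variance (1/12)/n, approaching a normal shape as n grows.
import random

random.seed(1)

n, reps = 30, 10000
means = [sum(random.random() for _ in range(n)) / n for _ in range(reps)]
m = sum(means) / reps
var = sum((x - m) ** 2 for x in means) / reps
# Theory: mean 0.5 and variance (1/12)/30, roughly 0.00278.
print(round(m, 2), round(var, 4))
```

Plotting a histogram of `means` would show the familiar bell shape even though each underlying variable is uniform, which is the content of the theorem.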
Conditional probability
The probability of an event given that the random experiment produces an outcome in another event.
Conditional probability mass function
The probability mass function of the conditional probability distribution of a discrete random variable.
Continuity correction
A correction factor used to improve the approximation to binomial probabilities from a normal distribution.
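As a sketch of the standard correction (the half-unit shift is the technique; the specific numbers below are illustrative), P(X ≤ k) for binomial X is approximated by a normal probability evaluated at k + 0.5 rather than k:

```python
# Normal approximation to P(X <= k) for X ~ Binomial(n, p),
# with the continuity correction k + 0.5.
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def binom_cdf_approx(k, n, p):
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    return normal_cdf((k + 0.5 - mu) / sigma)

# For n = 100, p = 0.5, the corrected z-value is (45.5 - 50)/5 = -0.9,
# giving roughly 0.184, close to the exact binomial probability;
# without the correction, z = -1 gives roughly 0.159.
print(round(binom_cdf_approx(45, 100, 0.5), 4))
```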
Continuous uniform random variable
A continuous random variable whose range is a finite interval and whose probability density function is constant.
Contrast
A linear function of treatment means with coefficients that total zero. A contrast is a summary of treatment means that is of interest in an experiment.
Control chart
A graphical display used to monitor a process. It usually consists of a horizontal center line corresponding to the in-control value of the parameter that is being monitored and lower and upper control limits. The control limits are determined by statistical criteria and are not arbitrary, nor are they related to specification limits. If sample points fall within the control limits, the process is said to be in-control, or free from assignable causes. Points beyond the control limits indicate an out-of-control process; that is, assignable causes are likely present. This signals the need to find and remove the assignable causes.
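As a small sketch of the statistical criterion mentioned above, the common three-sigma limits for a chart of sample means can be computed directly; the process parameters below are made-up illustrative values:

```python
# Three-sigma control limits for an x-bar chart monitoring means of
# samples of size n from a process with in-control mean mu and
# standard deviation sigma.
import math

def xbar_limits(mu, sigma, n):
    """Return (lower control limit, center line, upper control limit)."""
    half_width = 3 * sigma / math.sqrt(n)
    return mu - half_width, mu, mu + half_width

lcl, center, ucl = xbar_limits(10.0, 0.2, 4)
print(round(lcl, 2), center, round(ucl, 2))  # 9.7 10.0 10.3
```

A sample mean falling outside (9.7, 10.3) would then signal a likely assignable cause rather than chance variation.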
Correlation coefficient
A dimensionless measure of the linear association between two variables, usually lying in the interval from −1 to +1, with zero indicating the absence of correlation (but not necessarily the independence of the two variables).
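A minimal sketch of the sample version of this measure (Pearson's r), with made-up data used only to show the boundary case of a perfect linear relationship:

```python
# Sample correlation coefficient (Pearson r) for paired data.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear data gives r = 1 (up to rounding).
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # 1.0
```

Note that r near zero rules out linear association only; two variables can be strongly dependent in a nonlinear way and still have r close to zero.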
Critical region
In hypothesis testing, this is the portion of the sample space of a test statistic that will lead to rejection of the null hypothesis.
Defects-per-unit control chart
See U chart
Designed experiment
An experiment in which the tests are planned in advance and the plans usually incorporate statistical models. See Experiment
Error mean square
The error sum of squares divided by its number of degrees of freedom.
Exhaustive
A property of a collection of events that indicates that their union equals the sample space.
Experiment
A series of tests in which changes are made to the system under study.
Fisher’s least significant difference (LSD) method
A series of pair-wise hypothesis tests of treatment means in an experiment to determine which means differ.