- 11.9.1: Consider the data in Example 11.2.2 on page 703. Suppose that we fi...
- 11.9.2: Suppose that(Xi, Yi), i = 1,...,n, form a random sample of size n f...
- 11.9.3: Suppose that (Xi, Yi), i = 1,...,n, form a random sample of size n ...
- 11.9.4: Let θ1, θ2, and θ3 denote the unknown angles of a triangle, measured i...
- 11.9.5: Suppose that a straight line is to be fitted to n points (x1, y1), ...
- 11.9.6: Suppose that a least-squares line is fitted to the n points (x1, y1...
- 11.9.7: Suppose that a straight line y = β1 + β2x is to be fitted to the n po...
- 11.9.8: Suppose that twin sisters are each to take a certain mathematics ex...
- 11.9.9: Suppose that a sample of n observations is formed from k subsamples...
- 11.9.10: Consider the linear regression model Yi = β1wi + β2xi + εi for i = 1, ...
- 11.9.11: Determine an unbiased estimator of σ² in a two-way layout with K obs...
- 11.9.12: In a two-way layout with one observation in each cell, construct a ...
- 11.9.13: In a two-way layout with K observations in each cell (K ≥ 2), constru...
- 11.9.14: Suppose that each of two different varieties of corn is treated wit...
- 11.9.15: Suppose that W1, W2, and W3 are independent random variables, each ...
- 11.9.16: Suppose that it is desired to fit a curve of the form y = x to a gi...
- 11.9.17: Consider a problem of simple linear regression, and let ei = Yi − β̂0 − β̂1...
- 11.9.18: Consider a general linear model with n × p design matrix Z, and let W...
- 11.9.19: Consider a two-way layout in which the effects of the factors are a...
- 11.9.20: Consider a two-way layout in which the effects of the factors are a...
- 11.9.21: Consider again the conditions of Exercises 19 and 20, and let the e...
- 11.9.22: Consider again the conditions of Exercises 19 and 20, and suppose th...
- 11.9.23: In a three-way layout with one observation in each cell, the observ...
- 11.9.24: The 2000 U.S. presidential election was very close, especially in t...
Solutions for Chapter 11.9: Linear Statistical Models
Full solutions for Probability and Statistics | 4th Edition
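Several of the exercises above (11.9.5 through 11.9.7, for instance) concern fitting a straight line y = β1 + β2x to points (x1, y1), ..., (xn, yn) by least squares. A minimal sketch of the closed-form estimates (the function name and sample data here are illustrative, not from the text):

```python
def least_squares_line(xs, ys):
    """Return (b1, b2) minimizing sum((y - (b1 + b2*x))**2).

    Standard closed-form solution:
      b2 = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2)
      b1 = ybar - b2 * xbar
    """
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    b2 = sxy / sxx
    b1 = ybar - b2 * xbar
    return b1, b2

# Points lying exactly on y = 1 + 2x are recovered exactly.
b1, b2 = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])
```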
α-error (or α-risk)
In hypothesis testing, an error incurred by rejecting a null hypothesis when it is actually true (also called a type I error).
Acceptance region
In hypothesis testing, a region in the sample space of the test statistic such that if the test statistic falls within it, the null hypothesis cannot be rejected. This terminology is used because rejection of H0 is always a strong conclusion and acceptance of H0 is generally a weak conclusion.
Analytic study
A study in which a sample from a population is used to make inference to a future population. Stability needs to be assumed. See Enumerative study.
Asymptotic relative efficiency (ARE)
Used to compare hypothesis tests. The ARE of one test relative to another is the limiting ratio of the sample sizes necessary to obtain identical error probabilities for the two procedures.
Attributes data
A qualitative characteristic of an item or unit, usually arising in quality control. For example, classifying production units as defective or nondefective results in attributes data.
Axioms of probability
A set of rules that probabilities defined on a sample space must follow. See Probability.
In experimental design, a group of experimental units or material that is relatively homogeneous. The purpose of dividing experimental units into blocks is to produce an experimental design wherein variability within blocks is smaller than variability between blocks. This allows the factors of interest to be compared in an environment that has less variability than in an unblocked experiment.
Central composite design (CCD)
A second-order response surface design in k variables consisting of a two-level factorial, 2k axial runs, and one or more center points. The two-level factorial portion of a CCD can be a fractional factorial design when k is large. The CCD is the most widely used design for fitting a second-order model.
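The run count implied by this definition can be made concrete. The sketch below (function name, default axial distance, and data are illustrative, not from the glossary) enumerates the design points of a CCD whose factorial portion is a full 2^k design:

```python
from itertools import product

def ccd_runs(k, alpha=1.414, n_center=1):
    """Enumerate the design points of a central composite design in k
    factors: 2**k two-level factorial corners, 2*k axial runs at
    +/-alpha along each axis, and n_center center points."""
    corners = list(product((-1.0, 1.0), repeat=k))
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(tuple(pt))
    center = [(0.0,) * k] * n_center
    return corners + axial + center

runs = ccd_runs(2)  # 4 corners + 4 axial + 1 center = 9 runs
```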
Confidence level
Another term for the confidence coefficient.
Correlation coefficient
A dimensionless measure of the linear association between two variables, usually lying in the interval from −1 to +1, with zero indicating the absence of correlation (but not necessarily the independence of the two variables).
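Both halves of this definition can be checked numerically. The sketch below (names and data are illustrative) computes the sample correlation coefficient and exhibits a case where the correlation is zero even though one variable is a function of the other:

```python
def pearson_r(xs, ys):
    """Sample correlation coefficient: the covariance scaled by the two
    standard deviations, so the result is dimensionless and in [-1, 1]."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

xs = [-2, -1, 0, 1, 2]
r_linear = pearson_r(xs, [2 * x for x in xs])  # exact linear relation: +1
r_quad = pearson_r(xs, [x * x for x in xs])    # zero, yet y depends on x
```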
Covariance matrix
A square matrix that contains the variances and covariances among a set of random variables, say, X1, X2, …, Xk. The main diagonal elements of the matrix are the variances of the random variables and the off-diagonal elements are the covariances between Xi and Xj. Also called the variance-covariance matrix. When the random variables are standardized to have unit variances, the covariance matrix becomes the correlation matrix.
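A small sketch of this structure (function name and data are illustrative): entry (i, j) holds the sample covariance of variables i and j, so the diagonal holds the variances and the matrix is symmetric.

```python
def cov_matrix(columns):
    """Sample covariance matrix of variables given as equal-length
    columns. Entry (i, j) is the sample covariance of columns i and j
    (divisor n - 1); the diagonal holds the sample variances."""
    n = len(columns[0])
    means = [sum(c) / n for c in columns]

    def cov(i, j):
        return sum((columns[i][t] - means[i]) * (columns[j][t] - means[j])
                   for t in range(n)) / (n - 1)

    k = len(columns)
    return [[cov(i, j) for j in range(k)] for i in range(k)]

S = cov_matrix([[1, 2, 3, 4], [2, 4, 6, 8]])
# S[0][0] is the variance of the first variable; S[0][1] == S[1][0].
```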
Decision interval
A parameter in a tabular CUSUM algorithm that is determined from a trade-off between false alarms and the detection of assignable causes.
Degrees of freedom
The number of independent comparisons that can be made among the elements of a sample. The term is analogous to the number of degrees of freedom for an object in a dynamic system, which is the number of independent coordinates required to determine the motion of the object.
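The "independent comparisons" idea can be illustrated with deviations from a sample mean: the n deviations always satisfy one linear constraint (they sum to zero), so only n − 1 of them can vary freely. A quick check with illustrative data:

```python
xs = [3.0, 5.0, 7.0, 9.0]
xbar = sum(xs) / len(xs)
devs = [x - xbar for x in xs]

# The deviations satisfy one linear constraint (their sum is 0), so
# knowing any n - 1 of them determines the last: n - 1 degrees of freedom.
total = sum(devs)
```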
W. Edwards Deming (1900–1993) was a leader in the use of statistical quality control.
Dispersion
The amount of variability exhibited by data.
Distribution-free method(s)
Any method of inference (hypothesis testing or confidence interval construction) that does not depend on the form of the underlying distribution of the observations. Sometimes called nonparametric method(s).
Error of estimation
The difference between an estimated value and the true value.
Estimator (or point estimator)
A procedure for producing an estimate of a parameter of interest. An estimator is usually a function of only sample data values, and when these data values are available, it results in an estimate of the parameter of interest.
Gamma function
A function used in the probability density function of a gamma random variable that can be considered to extend factorials.
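The factorial connection can be checked directly with Python's standard-library gamma function (a sketch, not part of the glossary): Γ(n) = (n − 1)! for positive integers n, and Γ is also defined at non-integer arguments.

```python
import math

# Gamma extends factorials: Gamma(5) == 4! == 24.
g5 = math.gamma(5)

# It is defined for non-integer arguments too, e.g. Gamma(1/2) == sqrt(pi).
half = math.gamma(0.5)
```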
Generating function
A function that is used to determine properties of the probability distribution of a random variable. See Moment-generating function.