- 4.R4.1: Ontario Health Survey The Ministry of Health in the province of Ont...
- 4.R4.2: Bad sampling A large high school wants to gather student opinion ab...
- 4.R4.3: Drug testing A baseball team regularly conducts random drug tests o...
- 4.R4.4: Polling the faculty A researcher wants to study the attitudes of co...
- 4.R4.5: Been to the movies? An opinion poll calls 2000 randomly chosen resi...
- 4.R4.6: Are anesthetics safe? The National Halothane Study was a major inve...
- 4.R4.7: Ugly fries Few people want to eat discolored french fries. Potatoes...
- 4.R4.8: Attitudes toward homeless people Negative attitudes toward poor peo...
- 4.R4.9: An herb for depression? Does the herb Saint-John's-wort relieve major...
- 4.R4.10: Vitamin C for marathon runners An ultramarathon, as you might guess...
- 4.R4.11: How long did I work? A psychologist wants to know if the difficulty...
- 4.R4.12: Deceiving subjects Students sign up to be subjects in a psychology ...
Solutions for Chapter 4: The Practice of Statistics 4th Edition
2^(k-p) fractional factorial experiment
A fractional factorial experiment with k factors tested in a 2^(-p) fraction, with all factors tested at only two levels (settings) each.
Additivity property of χ²
If two independent random variables X1 and X2 are distributed as chi-square with v1 and v2 degrees of freedom, respectively, Y = X1 + X2 is a chi-square random variable with v = v1 + v2 degrees of freedom. This generalizes to any number of independent chi-square random variables.
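As a quick sketch (simulation setup is assumed, not from the text), the additivity property can be checked empirically: build chi-square variables as sums of squared standard normals, add two independent ones, and compare the sample mean and variance to the theoretical values v1 + v2 and 2(v1 + v2).

```python
# Empirical check of chi-square additivity (illustrative setup).
import random

random.seed(1)
v1, v2, n = 3, 5, 50_000

def chi2_sample(v):
    # A chi-square variable with v degrees of freedom is a sum of
    # v squared standard normal variables.
    return sum(random.gauss(0, 1) ** 2 for _ in range(v))

y = [chi2_sample(v1) + chi2_sample(v2) for _ in range(n)]
mean_y = sum(y) / n                              # should be near v1 + v2 = 8
var_y = sum((t - mean_y) ** 2 for t in y) / n    # should be near 2(v1 + v2) = 16
```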
Central limit theorem
The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. It is a necessary and sufficient condition that none of the variances of the individual random variables is large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
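A minimal illustration of this tendency, assuming Uniform(0,1) summands (a choice made for this sketch, not taken from the text): the sum of 30 such variables is approximately normal, so about 68.3% of simulated sums should land within one standard deviation of the mean.

```python
# Sums of n uniform variables behave approximately normally for large n.
import random

random.seed(0)
n, reps = 30, 20_000
sums = [sum(random.random() for _ in range(n)) for _ in range(reps)]

# Theoretical mean and standard deviation of a sum of n Uniform(0,1)s.
mu = n * 0.5
sigma = (n / 12) ** 0.5

# For a normal distribution, about 0.683 of values fall within one
# standard deviation of the mean.
within = sum(abs(s - mu) <= sigma for s in sums) / reps
```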
Comparative experiment
An experiment in which the treatments (experimental conditions) that are to be studied are included in the experiment. The data from the experiment are used to evaluate the treatments.
Conditional probability distribution
The distribution of a random variable given that the random experiment produces an outcome in an event. The given event might specify values for one or more other random variables.
Contingency table
A tabular arrangement expressing the assignment of members of a data set according to two or more categories or classification criteria.
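As a sketch (the data here are made up for illustration), a two-way table of counts can be tallied from paired categorical observations with the standard library's `collections.Counter`:

```python
# Tally a two-way contingency table from (row category, column category) pairs.
from collections import Counter

observations = [("male", "yes"), ("male", "no"), ("female", "yes"),
                ("female", "yes"), ("male", "yes")]  # hypothetical data
table = Counter(observations)
# Each cell count is looked up by its (row, column) pair,
# e.g. table[("male", "yes")] gives the count for that cell.
```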
Continuous distribution
A probability distribution for a continuous random variable.
See Control chart.
Critical region
In hypothesis testing, this is the portion of the sample space of a test statistic that will lead to rejection of the null hypothesis.
Decision interval
A parameter in a tabular CUSUM algorithm that is determined from a trade-off between false alarms and the detection of assignable causes.
Degrees of freedom
The number of independent comparisons that can be made among the elements of a sample. The term is analogous to the number of degrees of freedom for an object in a dynamic system, which is the number of independent coordinates required to determine the motion of the object.
Density function
Another name for a probability density function.
Distribution-free method(s)
Any method of inference (hypothesis testing or confidence interval construction) that does not depend on the form of the underlying distribution of the observations. Sometimes called nonparametric method(s).
Error mean square
The error sum of squares divided by its number of degrees of freedom.
Error sum of squares
In analysis of variance, this is the portion of total variability that is due to the random component in the data. It is usually based on replication of observations at certain treatment combinations in the experiment. It is sometimes called the residual sum of squares, although that term is better reserved for cases where the sum of squares is based on the remnants of a model-fitting process rather than on replication.
Factorial experiment
A type of experimental design in which every level of one factor is tested in combination with every level of another factor. In general, in a factorial experiment, all possible combinations of factor levels are tested.
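The full set of treatment combinations in such a design is simply the Cartesian product of the factor levels. A sketch with two hypothetical factors (the factor names and levels here are assumed for illustration):

```python
# Enumerate all runs of a full factorial design with two factors.
from itertools import product

temperature = [150, 200]        # assumed levels, degrees Celsius
pressure = [1.0, 1.5, 2.0]      # assumed levels, atmospheres

# A full factorial tests every combination: 2 x 3 = 6 runs.
runs = list(product(temperature, pressure))
```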
Gamma function
A function used in the probability density function of a gamma random variable that can be considered to extend factorials.
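The factorial-extending property can be seen directly with the standard library's `math.gamma`: for a positive integer n, Γ(n) = (n − 1)!, while non-integer arguments such as 1/2 also have well-defined values (Γ(1/2) = √π).

```python
# The gamma function agrees with factorials at integer arguments
# and extends them to non-integer arguments.
import math

integer_case = math.gamma(5)          # equals 4! = 24
half_case = math.gamma(0.5)           # equals sqrt(pi) ≈ 1.7724539
```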
Generating function
A function that is used to determine properties of the probability distribution of a random variable. See Moment-generating function.
Goodness of fit
In general, the agreement of a set of observed values and a set of theoretical values that depend on some hypothesis. The term is often used in fitting a theoretical distribution to a set of observations.
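One common way to quantify this agreement is the chi-square goodness-of-fit statistic, which sums (observed − expected)²/expected over the categories. A sketch with made-up counts from 120 rolls of a die, tested against the fair-die hypothesis:

```python
# Chi-square goodness-of-fit statistic for hypothetical die-roll counts.
observed = [18, 22, 16, 25, 20, 19]        # made-up counts, 120 rolls total
expected = sum(observed) / len(observed)   # 20 per face under a fair die

chi2 = sum((o - expected) ** 2 / expected for o in observed)
# Degrees of freedom = number of categories - 1 = 5; a small statistic
# (here 2.5) indicates close agreement with the hypothesized distribution.
```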
Hat matrix
In multiple regression, the matrix H = X(X′X)⁻¹X′. This is a projection matrix that maps the vector of observed response values into the vector of fitted values: ŷ = X(X′X)⁻¹X′y = Hy.
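A worked sketch with a tiny, made-up design matrix (intercept plus one predictor, three observations), using plain lists so the 2×2 inverse can be written out by hand. Two properties worth verifying: H is idempotent, and its trace equals the number of model parameters.

```python
# Compute the hat matrix H = X (X'X)^-1 X' and the fitted values H y
# for a small illustrative regression.
X = [[1.0, 0.0],
     [1.0, 1.0],
     [1.0, 2.0]]        # intercept column plus one predictor
y = [1.0, 2.0, 2.0]     # hypothetical responses

# X'X for this design, then its 2x2 inverse by the closed-form formula.
XtX = [[sum(r[i] * r[j] for r in X) for j in range(2)] for i in range(2)]
a, b = XtX[0]
c, d = XtX[1]
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]

# H[i][j] = row_i(X) * (X'X)^-1 * row_j(X)'
H = [[sum(X[i][p] * inv[p][q] * X[j][q] for p in range(2) for q in range(2))
      for j in range(3)] for i in range(3)]

# Fitted values: y_hat = H y
y_hat = [sum(H[i][j] * y[j] for j in range(3)) for i in range(3)]
```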