- Chapter 2: Probability
- Chapter 3: Discrete Random Variables and Probability Distributions
- Chapter 4: Continuous Random Variables and Probability Distributions
- Chapter 5: Joint Probability Distributions
- Chapter 6: Descriptive Statistics
- Chapter 7: Sampling Distributions and Point Estimation of Parameters
- Chapter 8: Statistical Intervals for a Single Sample
- Chapter 9: Tests of Hypotheses for a Single Sample
- Chapter 10: Statistical Inference for Two Samples
- Chapter 11: Simple Linear Regression and Correlation
- Chapter 12: Multiple Linear Regression
- Chapter 13: Design and Analysis of Single-Factor Experiments: The Analysis of Variance
- Chapter 14: Design of Experiments with Several Factors
- Chapter 15: Statistical Quality Control
Applied Statistics and Probability for Engineers 5th Edition - Solutions by Chapter
2^(k-p) factorial experiment
A fractional factorial experiment with k factors tested in a 2^(-p) fraction, with all factors tested at only two levels (settings) each.
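As a minimal illustration (not from the text), the sketch below enumerates the runs of a hypothetical 2^(3-1) half fraction, using the generator C = AB so that the defining relation is I = ABC:

```python
from itertools import product

# Minimal sketch of a 2^(3-1) fractional factorial design (k = 3, p = 1).
# Factors are coded -1/+1; the generator C = A*B selects the half fraction
# satisfying the defining relation I = ABC.
k, p = 3, 1
base_factors = k - p  # full two-level design in the first k - p factors

runs = []
for a, b in product((-1, 1), repeat=base_factors):
    c = a * b  # generated factor
    runs.append((a, b, c))

for run in runs:
    print(run)  # 4 runs instead of the 8 required by the full 2^3 design
```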
α-error (or α-risk)
In hypothesis testing, an error incurred by rejecting a null hypothesis when it is actually true (also called a type I error).
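The rejection rate under a true null hypothesis can be checked by simulation; the sketch below assumes a two-sided z-test with known σ = 1 and made-up settings, and should reject about 5% of the time at α = 0.05:

```python
import random
import statistics

# Sketch: estimating the type I error rate by simulation. Sample from a
# process for which H0 (mu = 0) is actually true and count how often a
# two-sided z-test at alpha = 0.05 rejects anyway.
random.seed(4)
z_crit = 1.96
n, reps = 25, 10_000

rejections = 0
for _ in range(reps):
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = statistics.mean(sample) / (1 / n ** 0.5)  # known sigma = 1
    if abs(z) > z_crit:
        rejections += 1
print(rejections / reps)  # roughly 0.05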
Addition rule
A formula used to determine the probability of the union of two (or more) events from the probabilities of the events and their intersection(s).
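For example, with made-up probabilities, the rule P(A ∪ B) = P(A) + P(B) − P(A ∩ B) works out as follows:

```python
# Addition rule: P(A or B) = P(A) + P(B) - P(A and B).
# Hypothetical example: A = "part has a surface flaw", B = "part is oversized".
p_a = 0.10
p_b = 0.05
p_a_and_b = 0.02  # assumed probability of the intersection

p_a_or_b = p_a + p_b - p_a_and_b
print(p_a_or_b)  # 0.13
```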
C chart
An attribute control chart that plots the total number of defects per unit in a subgroup. Similar to a defects-per-unit or U chart.
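As a rough sketch (assuming the standard three-sigma limits c̄ ± 3√c̄ for a count-of-defects chart), the control limits can be computed from made-up subgroup counts:

```python
import statistics

# Sketch of C-chart limits from made-up defect counts per subgroup:
# center line c_bar with three-sigma limits c_bar +/- 3*sqrt(c_bar).
counts = [4, 7, 3, 5, 6, 2, 8, 5, 4, 6]

c_bar = statistics.mean(counts)
ucl = c_bar + 3 * c_bar ** 0.5
lcl = max(0.0, c_bar - 3 * c_bar ** 0.5)  # truncated at zero
print(c_bar, lcl, ucl)
```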
Cause-and-effect diagram
A chart used to organize the various potential causes of a problem. Also called a fishbone diagram.
Central limit theorem
The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. It is a necessary and sufficient condition that none of the variances of the individual random variables are large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
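A quick simulation illustrates the theorem; the settings below (Uniform(0, 1) inputs, n = 30) are made up for illustration:

```python
import random
import statistics

# Minimal CLT sketch: sums of n independent Uniform(0, 1) variables.
# As n grows, the distribution of the sum should look approximately normal.
random.seed(1)
n, reps = 30, 10_000
sums = [sum(random.random() for _ in range(n)) for _ in range(reps)]

mean = statistics.mean(sums)    # theoretical mean: n/2 = 15
stdev = statistics.stdev(sums)  # theoretical sd: sqrt(n/12) ~ 1.58
print(mean, stdev)

# Fraction of sums within one standard deviation of the mean;
# a normal distribution would put about 68% there.
inside = sum(mean - stdev < s < mean + stdev for s in sums) / reps
print(inside)
```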
Chi-square test
Any test of significance based on the chi-square distribution. The most common chi-square tests are (1) testing hypotheses about the variance or standard deviation of a normal distribution and (2) testing goodness of fit of a theoretical distribution to sample data.
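A goodness-of-fit sketch, assuming SciPy is available and using made-up die-roll counts:

```python
from scipy import stats  # assumes SciPy is installed

# Sketch of a chi-square goodness-of-fit test with made-up data:
# 120 rolls of a die, tested against the fair-die expectation of 20 per face.
observed = [18, 23, 16, 21, 24, 18]
expected = [20] * 6

result = stats.chisquare(f_obs=observed, f_exp=expected)
print(result.statistic, result.pvalue)  # a small p-value would suggest lack of fit
```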
Confidence interval
If it is possible to write a probability statement of the form P(L ≤ θ ≤ U) = 1 − α, where L and U are functions of only the sample data and θ is a parameter, then the interval between L and U is called a confidence interval (or a 100(1 − α)% confidence interval). The interpretation is that a statement that the parameter θ lies in this interval will be true 100(1 − α)% of the times that such a statement is made.
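A minimal sketch of such an interval, assuming a known process standard deviation so that the z-based interval x̄ ± z_{α/2} σ/√n applies; the data and σ are made up:

```python
import math
import statistics

# 95% confidence interval for a mean with an assumed known sigma,
# x_bar +/- z_{alpha/2} * sigma / sqrt(n), keeping the example self-contained.
data = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7]  # made-up measurements
sigma = 0.25  # assumed known process standard deviation
z = 1.96      # z_{0.025} for a 95% interval

n = len(data)
x_bar = statistics.mean(data)
half_width = z * sigma / math.sqrt(n)
print(x_bar - half_width, x_bar + half_width)
```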
Contrast
A linear function of treatment means with coefficients that total zero. A contrast is a summary of treatment means that is of interest in an experiment.
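For example, with hypothetical treatment means, a contrast comparing treatment 1 against the average of treatments 2 and 3:

```python
# Sketch of a contrast among treatment means (made-up data).
# The coefficients sum to zero; this one compares treatment 1
# against the average of treatments 2 and 3.
means = [14.2, 12.8, 13.1]  # hypothetical treatment means
coeffs = [2, -1, -1]        # sums to zero, so it is a valid contrast
assert sum(coeffs) == 0

contrast = sum(c * m for c, m in zip(coeffs, means))
print(contrast)  # 2*14.2 - 12.8 - 13.1 = 2.5
```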
Control limits
See Control chart.
Cook's distance
In regression, Cook's distance is a measure of the influence of each individual observation on the estimates of the regression model parameters. It expresses the distance that the vector of model parameter estimates with the ith observation removed lies from the vector of model parameter estimates based on all observations. Large values of Cook's distance indicate that the observation is influential.
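A sketch that computes Cook's distance directly from this leave-one-out definition, D_i = (β̂ − β̂_(i))' X'X (β̂ − β̂_(i)) / (p · MSE), on made-up data:

```python
import numpy as np

# Cook's distance from its definition: how far the coefficient vector
# moves when observation i is deleted, scaled by p * MSE. Data are made up.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
y = 2.0 + 0.5 * x + rng.normal(scale=0.5, size=x.size)
y[-1] += 4.0  # plant one influential point

X = np.column_stack([np.ones_like(x), x])  # intercept + slope
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
p = X.shape[1]
mse = resid @ resid / (len(y) - p)
XtX = X.T @ X

d = []
for i in range(len(y)):
    Xi = np.delete(X, i, axis=0)
    yi = np.delete(y, i)
    beta_i, *_ = np.linalg.lstsq(Xi, yi, rcond=None)
    diff = beta - beta_i
    d.append(diff @ XtX @ diff / (p * mse))

print(np.argmax(d), max(d))  # the planted point should stand out
```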
Cumulative normal distribution function
The cumulative distribution function of the standard normal distribution, often denoted as Φ(x) and tabulated in Appendix Table II.
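Rather than a table lookup, Φ(x) can be evaluated through the error function, Φ(x) = (1 + erf(x/√2))/2; a minimal sketch:

```python
import math

# Standard normal CDF expressed through the error function:
# Phi(x) = (1 + erf(x / sqrt(2))) / 2.
def phi(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(phi(0.0))   # 0.5
print(phi(1.96))  # ~0.975, matching the usual table value
```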
Degrees of freedom
The number of independent comparisons that can be made among the elements of a sample. The term is analogous to the number of degrees of freedom for an object in a dynamic system, which is the number of independent coordinates required to determine the motion of the object.
Error propagation
An analysis of how the variance of the random variable that represents the output of a system depends on the variances of the inputs. A formula exists when the output is a linear function of the inputs, and the formula is simplified if the inputs are assumed to be independent.
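A sketch of the linear, independent-inputs case, Var(aX₁ + bX₂) = a²Var(X₁) + b²Var(X₂), checked by simulation with made-up coefficients and variances:

```python
import random
import statistics

# Variance propagation for a linear output with independent inputs:
# if Y = a*X1 + b*X2, then Var(Y) = a^2 * Var(X1) + b^2 * Var(X2).
a, b = 2.0, -3.0
var_x1, var_x2 = 0.04, 0.01  # assumed input variances

print(a**2 * var_x1 + b**2 * var_x2)  # analytic: 0.25

random.seed(5)
ys = [a * random.gauss(0, var_x1 ** 0.5) + b * random.gauss(0, var_x2 ** 0.5)
      for _ in range(100_000)]
print(statistics.pvariance(ys))       # simulation: close to 0.25
```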
Error sum of squares
In analysis of variance, this is the portion of total variability that is due to the random component in the data. It is usually based on replication of observations at certain treatment combinations in the experiment. It is sometimes called the residual sum of squares, although that term is better reserved for cases in which the sum of squares is based on the remnants of a model-fitting process and not on replication.
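A sketch computing the error sum of squares from replicated observations in a made-up one-way layout:

```python
import statistics

# Error sum of squares in a one-way layout with made-up data.
# SSE pools the squared deviations of each observation from its own
# treatment mean, capturing the within-treatment (random) variability.
treatments = {
    "A": [12.1, 11.8, 12.4],
    "B": [13.0, 13.3, 12.7],
    "C": [11.5, 11.9, 11.6],
}

sse = 0.0
for obs in treatments.values():
    m = statistics.mean(obs)
    sse += sum((y - m) ** 2 for y in obs)
print(sse)
```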
Estimate (or point estimate)
The numerical value of a point estimator.
F distribution
The distribution of the random variable defined as the ratio of two independent chi-square random variables, each divided by its number of degrees of freedom.
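A simulation sketch that builds F-distributed values directly from this definition, generating the chi-square variables as sums of squared standard normals:

```python
import random

# F-distributed values from the definition F = (U/d1) / (V/d2),
# with U, V independent chi-square random variables.
random.seed(3)

def chi_square(d: int) -> float:
    # A chi-square with d degrees of freedom is a sum of d squared N(0,1)s.
    return sum(random.gauss(0, 1) ** 2 for _ in range(d))

d1, d2 = 5, 10
samples = [(chi_square(d1) / d1) / (chi_square(d2) / d2) for _ in range(10_000)]

# The F(d1, d2) mean is d2 / (d2 - 2) for d2 > 2; here 10/8 = 1.25.
print(sum(samples) / len(samples))
```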
Factorial experiment
A type of experimental design in which every level of one factor is tested in combination with every level of another factor. In general, in a factorial experiment, all possible combinations of factor levels are tested.
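A minimal sketch enumerating the treatment combinations of a hypothetical two-factor design:

```python
from itertools import product

# Enumerating the runs of a factorial experiment with made-up factors.
# Every level of each factor is combined with every level of the other,
# giving 3 * 2 = 6 treatment combinations here.
temperature = [150, 175, 200]  # hypothetical factor levels
pressure = ["low", "high"]

for run in product(temperature, pressure):
    print(run)
```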
Generating function
A function that is used to determine properties of the probability distribution of a random variable. See Moment-generating function.
Geometric mean
The geometric mean of a set of n positive data values is the nth root of the product of the data values; that is, x̄_g = (x_1 · x_2 ⋯ x_n)^(1/n).
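A sketch using the equivalent log form, exp((1/n) Σ ln x_i), which avoids overflow from the direct product; the data are made up:

```python
import math

# Geometric mean via the log trick, exp(mean(log x_i)),
# numerically safer than multiplying all values directly.
data = [4.0, 1.0, 32.0]  # made-up positive values

g = math.exp(sum(math.log(x) for x in data) / len(data))
print(g)  # (4 * 1 * 32) ** (1/3) = 128 ** (1/3) ~ 5.04
```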