- Chapter 2: Probability
- Chapter 3: Discrete Random Variables and Probability Distributions
- Chapter 4: Continuous Random Variables and Probability Distributions
- Chapter 5: Joint Probability Distributions
- Chapter 6: Descriptive Statistics
- Chapter 7: Sampling Distributions and Point Estimation of Parameters
- Chapter 8: Statistical Intervals for a Single Sample
- Chapter 9: Tests of Hypotheses for a Single Sample
- Chapter 10: Statistical Inference for Two Samples
- Chapter 11: Simple Linear Regression and Correlation
- Chapter 12: Multiple Linear Regression
- Chapter 13: Design and Analysis of Single-Factor Experiments: The Analysis of Variance
- Chapter 14: Design of Experiments with Several Factors
- Chapter 15: Statistical Quality Control
Applied Statistics and Probability for Engineers 5th Edition - Solutions by Chapter
2^(k−p) factorial experiment
A fractional factorial experiment with k factors tested in a 2^(−p) fraction, with all factors tested at only two levels (settings) each.
Analysis of variance (ANOVA)
A method of decomposing the total variability in a set of observations, as measured by the sum of the squares of these observations from their average, into component sums of squares that are associated with specific defined sources of variation.
Attributes data
A qualitative characteristic of an item or unit, usually arising in quality control. For example, classifying production units as defective or nondefective results in attributes data.
Attribute control chart
Any control chart for a discrete random variable. See Variables control chart.
Average
See Arithmetic mean.
Bernoulli trials
Sequences of independent trials with only two outcomes, generally called "success" and "failure," in which the probability of success remains constant.
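A minimal Python sketch of such a sequence of trials, using only the standard library; the function name, the probability p = 0.3, and the trial count are illustrative choices, not from the text.

```python
import random

def bernoulli_trials(n, p, seed=0):
    """Simulate n independent trials, each a success (1) with constant probability p."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

# The observed success rate should settle near p as n grows.
trials = bernoulli_trials(10_000, 0.3, seed=42)
success_rate = sum(trials) / len(trials)
```

Because the success probability is constant and the trials are independent, the number of successes in n such trials follows a binomial distribution.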
Bias
An effect that systematically distorts a statistical result or estimate, preventing it from representing the true quantity of interest.
Causal variable
When y = f(x) and y is considered to be caused by x, x is sometimes called a causal variable.
Components of variance
The individual components of the total variance that are attributable to specific sources. This usually refers to the individual variance components arising from a random or mixed model analysis of variance.
Consistent estimator
An estimator that converges in probability to the true value of the estimated parameter as the sample size increases.
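A small simulation can illustrate consistency, assuming the sample mean as the estimator and a uniform(0, 1) population (true mean 0.5); all names and the sample sizes below are illustrative.

```python
import random
import statistics

def sample_mean(n, rng):
    # Estimate the population mean from n uniform(0, 1) observations.
    return statistics.fmean(rng.random() for _ in range(n))

rng = random.Random(1)
# Typical estimation error for small vs. large samples, over 200 repetitions each.
errors_small = [abs(sample_mean(10, rng) - 0.5) for _ in range(200)]
errors_large = [abs(sample_mean(1000, rng) - 0.5) for _ in range(200)]
```

As the sample size grows, the estimator's typical distance from the true mean shrinks, which is the behavior the definition describes.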
Contingency table
A tabular arrangement expressing the assignment of members of a data set according to two or more categories or classification criteria.
Correction factor
A term used for the quantity (1/n)(Σ_{i=1}^n x_i)^2 that is subtracted from Σ_{i=1}^n x_i^2 to give the corrected sum of squares, defined as Σ_{i=1}^n (x_i − x̄)^2. The correction factor can also be written as n x̄^2.
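A quick numerical check of this identity, with an arbitrary sample chosen for illustration:

```python
def correction_factor(xs):
    # (1/n) * (sum of x_i)^2, equivalently n * xbar^2.
    n = len(xs)
    return (sum(xs) ** 2) / n

def corrected_sum_of_squares(xs):
    # Sum of squared deviations from the sample mean.
    xbar = sum(xs) / len(xs)
    return sum((x - xbar) ** 2 for x in xs)

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
uncorrected = sum(x ** 2 for x in xs)   # → 232.0
cf = correction_factor(xs)              # → 200.0, same as n * xbar^2 = 8 * 25
css = corrected_sum_of_squares(xs)      # → 32.0
```

Subtracting the correction factor from the uncorrected sum of squares (232 − 200 = 32) reproduces the corrected sum of squares exactly.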
Cumulative normal distribution function
The cumulative distribution function of the standard normal distribution, often denoted Φ(x) and tabulated in Appendix Table II.
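In place of the table, Φ(x) can be computed from the error function via the standard identity Φ(x) = (1/2)[1 + erf(x/√2)]; a minimal sketch with the standard library:

```python
import math

def std_normal_cdf(x):
    """Cumulative distribution function of the standard normal, Phi(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Phi(0) = 0.5 by symmetry; Phi(1.96) is approximately 0.975.
p_zero = std_normal_cdf(0.0)
p_196 = std_normal_cdf(1.96)
```

This reproduces the familiar tabulated values, e.g. the 0.975 quantile at x ≈ 1.96 used for 95% two-sided intervals.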
Distribution free method(s)
Any method of inference (hypothesis testing or confidence interval construction) that does not depend on the form of the underlying distribution of the observations. Sometimes called nonparametric method(s).
Empirical model
A model to relate a response to one or more regressors or factors that is developed from data obtained from the system.
Error sum of squares
In analysis of variance, this is the portion of total variability that is due to the random component in the data. It is usually based on replication of observations at certain treatment combinations in the experiment. It is sometimes called the residual sum of squares, although this is really a better term to use only when the sum of squares is based on the remnants of a model-fitting process and not on replication.
Error variance
The variance of an error term or component in a model.
Expected value
The expected value of a random variable X is its long-term average or mean value. In the continuous case, the expected value of X is E(X) = ∫_{−∞}^{∞} x f(x) dx, where f(x) is the density function of the random variable X.
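The integral can be approximated numerically; a minimal sketch using a midpoint rule, assuming an exponential density f(x) = λe^(−λx) as the example (its expected value is known to be 1/λ). The function names and the integration bounds are illustrative.

```python
import math

def expected_value(density, lo, hi, steps=200_000):
    """Approximate E(X) = integral of x * f(x) dx with a midpoint rule."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h
        total += x * density(x)
    return total * h

lam = 2.0
def exp_density(x):
    # Exponential density with rate lam; its mean is 1 / lam = 0.5.
    return lam * math.exp(-lam * x)

# Integrate over [0, 40]; the density is negligible beyond that range.
mean = expected_value(exp_density, 0.0, 40.0)
```

For densities with unbounded support, the bounds simply need to be wide enough that the omitted tail contributes negligibly to the integral.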
Goodness of fit
In general, the agreement of a set of observed values and a set of theoretical values that depend on some hypothesis. The term is often used in fitting a theoretical distribution to a set of observations.
Harmonic mean
The harmonic mean h̄ of a set of data values is the reciprocal of the arithmetic mean of the reciprocals of the data values; that is, h̄ = [ (1/n) Σ_{i=1}^n (1/x_i) ]^{−1}.
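A direct translation of that formula, checked against the standard library's own implementation; the example data (two speeds, as in the classic average-speed problem) is illustrative.

```python
import statistics

def harmonic_mean(xs):
    """Reciprocal of the arithmetic mean of the reciprocals of the data values."""
    n = len(xs)
    return n / sum(1.0 / x for x in xs)

# Average speed over two equal distances driven at 40 and 60:
# 2 / (1/40 + 1/60) = 48, not the arithmetic mean 50.
speeds = [40.0, 60.0]
h = harmonic_mean(speeds)  # → 48.0
```

The harmonic mean is the appropriate average for rates measured over equal numerators (e.g. equal distances), which is why it falls below the arithmetic mean here.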