- Chapter 1: Overview and Descriptive Statistics
- Chapter 2: Probability
- Chapter 3: Discrete Random Variables and Probability Distributions
- Chapter 4: Continuous Random Variables and Probability Distributions
- Chapter 5: Joint Probability Distributions and Random Samples
- Chapter 6: Point Estimation
- Chapter 7: Statistical Intervals Based on a Single Sample
- Chapter 8: Tests of Hypotheses Based on a Single Sample
- Chapter 9: Inferences Based on Two Samples
- Chapter 10: The Analysis of Variance
- Chapter 11: Multifactor Analysis of Variance
- Chapter 12: Simple Linear Regression and Correlation
- Chapter 13: Nonlinear and Multiple Regression
- Chapter 14: Goodness-of-Fit Tests and Categorical Data Analysis
- Chapter 15: Distribution-Free Procedures
- Chapter 16: Quality Control Methods
Probability and Statistics for Engineering and the Sciences 8th Edition - Solutions by Chapter
Bayes' theorem
An equation for a conditional probability such as P(A | B) in terms of the reverse conditional probability P(B | A).
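As a minimal sketch of the rule, with P(B) expanded by the law of total probability over A and its complement (the diagnostic-test numbers below are hypothetical):

```python
def bayes(p_b_given_a, p_a, p_b_given_not_a):
    # P(A|B) = P(B|A) P(A) / P(B), where
    # P(B) = P(B|A) P(A) + P(B|A') P(A')  (total probability)
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

# Hypothetical diagnostic test: 1% prevalence, 95% sensitivity,
# 5% false-positive rate. Posterior probability of disease given a
# positive result is well below the sensitivity.
posterior = bayes(0.95, 0.01, 0.05)
```

Note how a rare condition keeps the posterior small even for an accurate test, since the false positives from the large healthy group dominate.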
Bivariate normal distribution
The joint distribution of two normal random variables
Components of variance
The individual components of the total variance that are attributable to specific sources. This usually refers to the individual variance components arising from a random or mixed model analysis of variance.
Conditional probability
The probability of an event given that the random experiment produces an outcome in another event.
Continuous random variable
A random variable with an interval (either finite or infinite) of real numbers for its range.
Control chart
A graphical display used to monitor a process. It usually consists of a horizontal center line corresponding to the in-control value of the parameter that is being monitored and lower and upper control limits. The control limits are determined by statistical criteria and are not arbitrary, nor are they related to specification limits. If sample points fall within the control limits, the process is said to be in-control, or free from assignable causes. Points beyond the control limits indicate an out-of-control process; that is, assignable causes are likely present. This signals the need to find and remove the assignable causes.
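For a concrete case, the statistical criterion is commonly three-sigma limits on the subgroup mean. A minimal sketch, assuming the in-control mean and standard deviation are known (the numeric values below are hypothetical):

```python
def xbar_limits(mu, sigma, n):
    # Three-sigma control limits for the mean of subgroups of size n,
    # assuming known in-control mean mu and standard deviation sigma.
    half_width = 3 * sigma / n ** 0.5
    return mu - half_width, mu + half_width

def in_control(xbar, lcl, ucl):
    # A point within the limits gives no evidence of an assignable cause.
    return lcl <= xbar <= ucl

lcl, ucl = xbar_limits(10.0, 0.5, 4)  # hypothetical in-control values
```

In practice the in-control parameters are estimated from preliminary samples rather than known exactly.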
Correlation
In the most general usage, a measure of the interdependence among data. The concept may include more than two variables. The term is most commonly used in a narrow sense to express the relationship between quantitative variables or ranks.
Covariance matrix
A square matrix that contains the variances and covariances among a set of random variables, say, X1, X2, …, Xk. The main diagonal elements of the matrix are the variances of the random variables and the off-diagonal elements are the covariances between Xi and Xj. Also called the variance-covariance matrix. When the random variables are standardized to have unit variances, the covariance matrix becomes the correlation matrix.
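A minimal two-variable sketch of the sample version of this matrix, and of the standardization that turns it into the correlation matrix:

```python
from math import sqrt

def cov_matrix(xs, ys):
    # Sample variance-covariance matrix of two variables: the diagonal
    # holds the variances, the off-diagonal the covariance.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return [[sxx, sxy], [sxy, syy]]

def to_correlation(c):
    # Standardizing to unit variances: divide each covariance by the
    # product of the standard deviations.
    r = c[0][1] / sqrt(c[0][0] * c[1][1])
    return [[1.0, r], [r, 1.0]]
```

For perfectly linearly related data (e.g. ys = 2 * xs), the off-diagonal entry of the correlation matrix is 1.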
Critical region
In hypothesis testing, this is the portion of the sample space of a test statistic that will lead to rejection of the null hypothesis.
Cumulative normal distribution function
The cumulative distribution of the standard normal distribution, often denoted as Φ(x) and tabulated in Appendix Table II.
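Rather than interpolating in the table, Φ(x) can be computed directly from the error function, since Φ(x) = (1 + erf(x/√2))/2:

```python
import math

def phi(x):
    # Standard normal CDF via the error function:
    # Phi(x) = (1 + erf(x / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

For example, phi(1.96) recovers the familiar 0.975 to tabular accuracy, and symmetry gives phi(-x) = 1 - phi(x).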
Decision interval
A parameter in a tabular CUSUM algorithm that is determined from a trade-off between false alarms and the detection of assignable causes.
Defects-per-unit control chart
See U chart
Degrees of freedom
The number of independent comparisons that can be made among the elements of a sample. The term is analogous to the number of degrees of freedom for an object in a dynamic system, which is the number of independent coordinates required to determine the motion of the object.
Deming, W. Edwards
W. Edwards Deming (1900–1993) was a leader in the use of statistical quality control.
Efficiency
A concept in parameter estimation that uses the variances of different estimators; essentially, an estimator is more efficient than another estimator if it has smaller variance. When estimators are biased, the concept requires modification.
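A small Monte Carlo sketch of the idea, comparing two unbiased estimators of a normal mean, the sample mean and the sample median (by symmetry both are unbiased here; the sample sizes and seed are arbitrary choices):

```python
import random
import statistics

def estimator_variances(n=20, reps=2000, seed=1):
    # Simulate the sampling distributions of the mean and the median
    # of n standard normal observations, and return their variances.
    rng = random.Random(seed)
    means, medians = [], []
    for _ in range(reps):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        means.append(statistics.mean(sample))
        medians.append(statistics.median(sample))
    return statistics.pvariance(means), statistics.pvariance(medians)

v_mean, v_median = estimator_variances()
# The mean has the smaller variance, so it is the more efficient
# estimator of a normal mean.
```

For normal data the large-sample relative efficiency of the median to the mean is 2/π ≈ 0.64, which the simulated variance ratio roughly reproduces.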
F-test
Any test of significance involving the F distribution. The most common F-tests are (1) testing hypotheses about the variances or standard deviations of two independent normal distributions, (2) testing hypotheses about treatment means or variance components in the analysis of variance, and (3) testing significance of regression or tests on subsets of parameters in a regression model.
Finite population correction factor
A term in the formula for the variance of a hypergeometric random variable.
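Concretely, the variance of a hypergeometric random variable (X = number of successes in a sample of n drawn without replacement from N items, M of them successes) is n·p·(1 − p)·(N − n)/(N − 1) with p = M/N; the last factor is the finite population correction. A minimal sketch:

```python
def fpc(N, n):
    # Finite population correction factor (N - n) / (N - 1).
    return (N - n) / (N - 1)

def hypergeom_variance(N, M, n):
    # Variance of a hypergeometric random variable:
    # n * p * (1 - p) * fpc, with p = M / N.
    p = M / N
    return n * p * (1 - p) * fpc(N, n)
```

When n = N (the whole population is sampled) the factor is 0 and the variance vanishes; when N is much larger than n the factor is near 1 and the binomial variance n·p·(1 − p) is recovered.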
Fisher’s least significant difference (LSD) method
A series of pairwise hypothesis tests of treatment means in an experiment to determine which means differ.
Forward selection
A method of variable selection in regression, where variables are inserted one at a time into the model until no other variables that contribute significantly to the model can be found.