- Chapter 1: The Nature of Probability and Statistics
- Chapter 2: Frequency Distributions and Graphs
- Chapter 2-1: Organizing Data
- Chapter 2-2: Histograms, Frequency Polygons, and Ogives
- Chapter 2-3: Other Types of Graphs
- Chapter 3: Data Description
- Chapter 3-1: Measures of Central Tendency
- Chapter 3-2: Measures of Variation
- Chapter 3-3: Measures of Position
- Chapter 3-4: Exploratory Data Analysis
- Chapter 4: Probability and Counting Rules
- Chapter 4-1: Sample Spaces and Probability
- Chapter 4-2: The Addition Rules for Probability
- Chapter 4-3: The Multiplication Rules and Conditional Probability
- Chapter 4-4: Counting Rules
- Chapter 4-5: Probability and Counting Rules
- Chapter 5: Review Exercises
- Chapter 5-1: Probability Distributions
- Chapter 5-2: Mean, Variance, Standard Deviation, and Expectation
- Chapter 5-3: The Binomial Distribution
- Chapter 5-4: Other Types of Distributions (Optional)
- Chapter 6: Review Exercises
- Chapter 6-1: Normal Distributions
- Chapter 6-2: Applications of the Normal Distribution
- Chapter 6-3: The Central Limit Theorem
- Chapter 6-4: The Normal Approximation to the Binomial Distribution
- Chapter 7: Review Exercises
- Chapter 7-1: Confidence Intervals for the Mean When σ Is Known
- Chapter 7-2: Confidence Intervals for the Mean When σ Is Unknown
- Chapter 7-3: Confidence Intervals and Sample Size for Proportions
- Chapter 7-4: Confidence Intervals for Variances and Standard Deviations
- Chapter 8: Review Exercises
- Chapter 8-1: Steps in Hypothesis Testing (Traditional Method)
- Chapter 8-2: z Test for a Mean
- Chapter 8-3: t Test for a Mean
- Chapter 8-4: z Test for a Proportion
- Chapter 8-5: χ² Test for a Variance or Standard Deviation
- Chapter 8-6: Additional Topics Regarding Hypothesis Testing
- Chapter 9: Review Exercises
- Chapter 9-1: Testing the Difference Between Two Means: Using the z Test
- Chapter 9-2: Testing the Difference Between Two Means of Independent Samples: Using the t Test
- Chapter 9-3: Testing the Difference Between Two Means: Dependent Samples
- Chapter 9-4: Testing the Difference Between Proportions
- Chapter 9-5: Testing the Difference Between Two Variances
- Chapter 10: Review Exercises
- Chapter 10-1: Scatter Plots and Correlation
- Chapter 10-2: Regression
- Chapter 10-3: Coefficient of Determination and Standard Error of the Estimate
- Chapter 10-4: Multiple Regression (Optional)
- Chapter 11: Review Exercises
- Chapter 11-1: Test for Goodness of Fit
- Chapter 11-2: Tests Using Contingency Tables
- Chapter 12: Review Exercises
- Chapter 12-1: One-Way Analysis of Variance
- Chapter 12-2: The Scheffé Test and the Tukey Test
- Chapter 12-3: Two-Way Analysis of Variance
- Chapter 13: Review Exercises
- Chapter 13-1: Advantages and Disadvantages of Nonparametric Methods
- Chapter 13-2: The Sign Test
- Chapter 13-3: The Wilcoxon Rank Sum Test
- Chapter 13-4: The Wilcoxon Signed-Rank Test
- Chapter 13-5: The Kruskal-Wallis Test
- Chapter 13-6: The Spearman Rank Correlation Coefficient and the Runs Test
- Chapter 14: Review Exercises
- Chapter 14-1: Common Sampling Techniques
- Chapter 14-2: Surveys and Questionnaire Design
- Chapter 14-3: Simulation Techniques and the Monte Carlo Method
Elementary Statistics: A Step by Step Approach 8th ed. 8th Edition - Solutions by Chapter
Additivity property of χ²
If two independent random variables X1 and X2 are distributed as chi-square with v1 and v2 degrees of freedom, respectively, then Y = X1 + X2 is a chi-square random variable with v = v1 + v2 degrees of freedom. This generalizes to any number of independent chi-square random variables.
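As a quick check of the additivity property, a simulation sketch (assuming NumPy is available; the degrees of freedom and sample size below are arbitrary illustrative choices). A chi-square variable with v degrees of freedom has mean v and variance 2v, so the sum should have mean v1 + v2 and variance 2(v1 + v2):

```python
import numpy as np

# Illustration: if X1 ~ chi-square(v1) and X2 ~ chi-square(v2) are
# independent, then Y = X1 + X2 ~ chi-square(v1 + v2).
rng = np.random.default_rng(0)
v1, v2 = 3, 5
x1 = rng.chisquare(v1, size=200_000)
x2 = rng.chisquare(v2, size=200_000)
y = x1 + x2

# Sample mean should be near v1 + v2 = 8, sample variance near 2*(v1 + v2) = 16.
print(round(y.mean(), 2))
print(round(y.var(), 2))
```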
Assignable cause
The portion of the variability in a set of observations that can be traced to specific causes, such as operators, materials, or equipment. Also called a special cause.
Bernoulli trials
Sequences of independent trials with only two outcomes, generally called “success” and “failure,” in which the probability of success remains constant.
Categorical data
Data consisting of counts or observations that can be classified into categories. The categories may be descriptive.
Conditional probability mass function
The probability mass function of the conditional probability distribution of a discrete random variable.
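A minimal sketch of the idea, using a small hypothetical joint pmf for two discrete random variables X and Y. The conditional pmf of X given Y = y divides each joint probability f(x, y) by the marginal probability of y:

```python
# Hypothetical joint pmf f_XY(x, y) for illustration only.
joint = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.30, (1, 1): 0.40,
}

def conditional_pmf_x_given_y(y):
    """Return the conditional pmf of X given Y = y as a dict {x: prob}."""
    marginal_y = sum(p for (x, yy), p in joint.items() if yy == y)
    return {x: p / marginal_y for (x, yy), p in joint.items() if yy == y}

# Given Y = 1, the joint masses 0.20 and 0.40 are rescaled by f_Y(1) = 0.60.
pmf = conditional_pmf_x_given_y(1)
print(pmf)
```

Note that the rescaling guarantees the conditional probabilities sum to 1, which is what makes the result a valid pmf.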
Contrast
A linear function of treatment means with coefficients that total zero. A contrast is a summary of treatment means that is of interest in an experiment.
Covariance matrix
A square matrix that contains the variances and covariances among a set of random variables, say, X1, X2, …, Xk. The main diagonal elements of the matrix are the variances of the random variables, and the off-diagonal elements are the covariances between Xi and Xj. Also called the variance-covariance matrix. When the random variables are standardized to have unit variances, the covariance matrix becomes the correlation matrix.
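A sketch of these properties with NumPy (the data are arbitrary simulated values): the diagonal of the covariance matrix matches the individual sample variances, and after standardizing each variable to unit variance the covariance matrix coincides with the correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(3, 500))       # 3 random variables, 500 observations each

cov = np.cov(data)                      # 3x3 variance-covariance matrix
variances = data.var(axis=1, ddof=1)    # should match the main diagonal of cov

# Standardize each variable to unit variance; its covariance matrix is then
# the correlation matrix.
standardized = data / data.std(axis=1, ddof=1, keepdims=True)
corr = np.corrcoef(data)

print(np.allclose(np.diag(cov), variances))
print(np.allclose(np.cov(standardized), corr))
```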
Curvilinear regression
An expression sometimes used for nonlinear regression models or polynomial regression models.
Defect concentration diagram
A quality tool that graphically shows the location of defects on a part or in a process.
W. Edwards Deming (1900–1993) was a leader in the use of statistical quality control.
Deming’s 14 points
A management philosophy promoted by W. Edwards Deming that emphasizes the importance of change and quality.
Design matrix
A matrix that provides the tests that are to be conducted in an experiment.
Efficiency
A concept in parameter estimation that uses the variances of different estimators; essentially, an estimator is more efficient than another estimator if it has smaller variance. When estimators are biased, the concept requires modification.
Empirical model
A model to relate a response to one or more regressors or factors that is developed from data obtained from the system.
Error propagation
An analysis of how the variance of the random variable that represents the output of a system depends on the variances of the inputs. A formula exists when the output is a linear function of the inputs, and the formula is simplified if the inputs are assumed to be independent.
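A simulation sketch of the simplified linear case under the independence assumption (coefficients and input variances are arbitrary illustrative values): for Y = c1·X1 + c2·X2 with X1 and X2 independent, Var(Y) = c1²·Var(X1) + c2²·Var(X2):

```python
import numpy as np

rng = np.random.default_rng(2)
c1, c2 = 2.0, -3.0
x1 = rng.normal(0.0, 1.5, size=200_000)   # Var(X1) = 1.5**2 = 2.25
x2 = rng.normal(0.0, 0.5, size=200_000)   # Var(X2) = 0.5**2 = 0.25
y = c1 * x1 + c2 * x2

# Linear error-propagation formula for independent inputs.
predicted = c1**2 * 1.5**2 + c2**2 * 0.5**2   # 4*2.25 + 9*0.25 = 11.25

# The simulated variance of Y should be close to the predicted value.
print(round(y.var(), 2), predicted)
```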
Estimator (or point estimator)
A procedure for producing an estimate of a parameter of interest. An estimator is usually a function of only sample data values, and when these data values are available, it results in an estimate of the parameter of interest.
Exhaustive
A property of a collection of events that indicates that their union equals the sample space.
Fractional factorial experiment
A type of factorial experiment in which not all possible treatment combinations are run. This is usually done to reduce the size of an experiment with several factors.
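As one concrete construction, a sketch of a half-fraction of a 2³ factorial design (a standard choice, here using the defining relation I = ABC: keep only the runs where the product of the coded factor levels equals +1). This runs 4 treatment combinations instead of all 8:

```python
from itertools import product

# Full 2^3 factorial: every combination of low (-1) and high (+1) levels
# for three factors A, B, C.
full_design = list(product([-1, 1], repeat=3))

# Principal half-fraction with defining relation I = ABC: keep runs
# whose coded levels multiply to +1.
half_fraction = [run for run in full_design
                 if run[0] * run[1] * run[2] == 1]

print(len(full_design), len(half_fraction))   # 8 runs reduced to 4
for run in half_fraction:
    print(run)
```

The trade-off is that effects become aliased (here, each main effect with a two-factor interaction), which is the price paid for the smaller experiment.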
Gaussian distribution
Another name for the normal distribution, based on the strong connection of Karl F. Gauss to the normal distribution; often used in physics and electrical engineering applications.