- Chapter 1: The Nature of Probability and Statistics
- Chapter 2: Frequency Distributions and Graphs
- Chapter 2-1: Organizing Data
- Chapter 2-2: Histograms, Frequency Polygons, and Ogives
- Chapter 2-3: Other Types of Graphs
- Chapter 3: Data Description
- Chapter 3-1: Measures of Central Tendency
- Chapter 3-2: Measures of Variation
- Chapter 3-3: Measures of Position
- Chapter 3-4: Exploratory Data Analysis
- Chapter 4: Probability and Counting Rules
- Chapter 4-1: Sample Spaces and Probability
- Chapter 4-2: The Addition Rules for Probability
- Chapter 4-3: The Multiplication Rules and Conditional Probability
- Chapter 4-4: Counting Rules
- Chapter 4-5: Probability and Counting Rules
- Chapter 5: Review Exercises
- Chapter 5-1: Probability Distributions
- Chapter 5-2: Mean, Variance, Standard Deviation, and Expectation
- Chapter 5-3: The Binomial Distribution
- Chapter 5-4: Other Types of Distributions (Optional)
- Chapter 6: Review Exercises
- Chapter 6-1: Normal Distributions
- Chapter 6-2: Applications of the Normal Distribution
- Chapter 6-3: The Central Limit Theorem
- Chapter 6-4: The Normal Approximation to the Binomial Distribution
- Chapter 7: Review Exercises
- Chapter 7-1: Confidence Intervals for the Mean When σ Is Known
- Chapter 7-2: Confidence Intervals for the Mean When σ Is Unknown
- Chapter 7-3: Confidence Intervals and Sample Size for Proportions
- Chapter 7-4: Confidence Intervals for Variances and Standard Deviations
- Chapter 8: Review Exercises
- Chapter 8-1: Steps in Hypothesis Testing: Traditional Method
- Chapter 8-2: z Test for a Mean
- Chapter 8-3: t Test for a Mean
- Chapter 8-4: z Test for a Proportion
- Chapter 8-5: χ² Test for a Variance or Standard Deviation
- Chapter 8-6: Additional Topics Regarding Hypothesis Testing
- Chapter 9: Review Exercises
- Chapter 9-1: Testing the Difference Between Two Means: Using the z Test
- Chapter 9-2: Testing the Difference Between Two Means of Independent Samples: Using the t Test
- Chapter 9-3: Testing the Difference Between Two Means: Dependent Samples
- Chapter 9-4: Testing the Difference Between Proportions
- Chapter 9-5: Testing the Difference Between Two Variances
- Chapter 10: Review Exercises
- Chapter 10-1: Scatter Plots and Correlation
- Chapter 10-2: Regression
- Chapter 10-3: Coefficient of Determination and Standard Error of the Estimate
- Chapter 10-4: Multiple Regression (Optional)
- Chapter 11: Review Exercises
- Chapter 11-1: Test for Goodness of Fit
- Chapter 11-2: Tests Using Contingency Tables
- Chapter 12: Review Exercises
- Chapter 12-1: One-Way Analysis of Variance
- Chapter 12-2: The Scheffé Test and the Tukey Test
- Chapter 12-3: Two-Way Analysis of Variance
- Chapter 13: Review Exercises
- Chapter 13-1: Advantages and Disadvantages of Nonparametric Methods
- Chapter 13-2: The Sign Test
- Chapter 13-3: The Wilcoxon Rank Sum Test
- Chapter 13-4: The Wilcoxon Signed-Rank Test
- Chapter 13-5: The Kruskal-Wallis Test
- Chapter 13-6: The Spearman Rank Correlation Coefficient and the Runs Test
- Chapter 14: Review Exercises
- Chapter 14-1: Common Sampling Techniques
- Chapter 14-2: Surveys and Questionnaire Design
- Chapter 14-3: Simulation Techniques and the Monte Carlo Method
Elementary Statistics: A Step by Step Approach, 8th Edition - Solutions by Chapter
2^k factorial experiment
A full factorial experiment with k factors and all factors tested at only two levels (settings) each.
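The runs of such a design can be enumerated with the conventional −1/+1 level coding. A minimal sketch (the function name `two_level_design` is our own, not from the text):

```python
from itertools import product

def two_level_design(k):
    """Enumerate all runs of a full 2^k factorial design.

    Each factor is coded -1 (low level) and +1 (high level);
    the design has 2**k runs, one per level combination.
    """
    return list(product([-1, 1], repeat=k))

runs = two_level_design(3)
print(len(runs))   # 8 runs for k = 3 factors
print(runs[0])     # (-1, -1, -1): every factor at its low level
```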
α-error (or α-risk)
In hypothesis testing, an error incurred by rejecting a null hypothesis when it is actually true (also called a type I error).
Adjusted R²
A variation of the R² statistic that compensates for the number of parameters in a regression model. Essentially, the adjustment is a penalty for increasing the number of parameters in the model.
Alias
In a fractional factorial experiment, when certain factor effects cannot be estimated uniquely, they are said to be aliased.
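The penalty can be made concrete with the standard adjustment formula, R²_adj = 1 − (1 − R²)(n − 1)/(n − p − 1), where n is the number of observations and p the number of predictors. A minimal sketch (the function name is our own):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for a model with p predictors fit to n observations.

    For a fixed R^2, increasing p lowers the adjusted value,
    penalizing models that add parameters without adding fit.
    """
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Same R^2 = 0.80 on n = 30 points: the 2-predictor model is
# rewarded over the 10-predictor one.
print(round(adjusted_r2(0.80, 30, 2), 3))    # 0.785
print(round(adjusted_r2(0.80, 30, 10), 3))   # 0.695
```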
Average run length, or ARL
The average number of samples taken in a process monitoring or inspection scheme until the scheme signals that the process is operating at a level different from the level at which it began.
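When successive samples signal independently, each with probability p, the run length is geometric and the ARL is simply 1/p. A sketch under that assumption, using the classic 3-sigma Shewhart chart as the example:

```python
import math

def arl(p_signal):
    """Average run length when each sample independently signals
    with probability p_signal (mean of a geometric distribution)."""
    return 1.0 / p_signal

# In-control ARL of a 3-sigma Shewhart chart on normal data:
# the per-sample false-alarm probability is P(|Z| > 3),
# computed here via the complementary error function.
p = math.erfc(3 / math.sqrt(2))
print(round(arl(p)))   # 370: roughly 370 samples between false alarms
```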
Bias
An effect that systematically distorts a statistical result or estimate, preventing it from representing the true quantity of interest.
Central tendency
The tendency of data to cluster around some value. Central tendency is usually expressed by a measure of location such as the mean, median, or mode.
Coefficient of determination
See R².
Confidence coefficient
The probability 1 − α associated with a confidence interval, expressing the probability that the stated interval will contain the true parameter value.
Counting rules
Formulas used to determine the number of elements in sample spaces and events.
Critical region
In hypothesis testing, this is the portion of the sample space of a test statistic that will lead to rejection of the null hypothesis.
Critical value
The value of a statistic corresponding to a stated significance level as determined from the sampling distribution. For example, if P(Z ≥ z_{0.025}) = P(Z ≥ 1.96) = 0.025, then z_{0.025} = 1.96 is the critical value of z at the 0.025 level of significance.
Crossed factors
Another name for factors that are arranged in a factorial experiment.
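The numerical example in the critical value entry (z at the 0.025 level) can be reproduced with the standard library's normal distribution; `z_critical` is our own wrapper name:

```python
from statistics import NormalDist

def z_critical(alpha):
    """Upper-tail critical value of the standard normal:
    the z with area alpha above it under N(0, 1)."""
    return NormalDist().inv_cdf(1 - alpha)

print(round(z_critical(0.025), 2))   # 1.96, matching the glossary example
print(round(z_critical(0.05), 2))    # 1.64, the one-sided 5% critical value
```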
Deming’s 14 points
A management philosophy promoted by W. Edwards Deming that emphasizes the importance of change and quality
Distribution-free method(s)
Any method of inference (hypothesis testing or confidence interval construction) that does not depend on the form of the underlying distribution of the observations. Sometimes called nonparametric method(s).
Event
A subset of a sample space.
Exhaustive
A property of a collection of events that indicates that their union equals the sample space.
Expected value
The expected value of a random variable X is its long-term average or mean value. In the continuous case, the expected value of X is E(X) = ∫_{−∞}^{∞} x f(x) dx, where f(x) is the density function of the random variable X.
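The integral in this definition can be approximated numerically. A minimal sketch using a midpoint Riemann sum over a truncated range (the function name and the choice of an exponential density are ours; the exponential with rate 2 is used because its mean is known to be 1/2):

```python
import math

def expected_value(f, a, b, n=100_000):
    """Approximate E(X) = integral of x * f(x) dx over [a, b]
    with a midpoint Riemann sum; f is the density of X."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h   # midpoint of subinterval i
        total += x * f(x)
    return total * h

# Exponential density with rate 2; tail mass beyond 50 is negligible.
f = lambda x: 2 * math.exp(-2 * x)
print(round(expected_value(f, 0, 50), 3))   # ≈ 0.5, the known mean 1/2
```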
Experiment
A series of tests in which changes are made to the system under study.
Frequency distribution
An arrangement of the frequencies of observations in a sample or population according to the values that the observations take on.
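Building such a tally is a one-liner with the standard library; the sample data here is invented for illustration:

```python
from collections import Counter

# A frequency distribution pairs each observed value with its count.
data = [2, 3, 3, 1, 2, 3, 4, 2, 3, 1]
freq = Counter(data)

for value in sorted(freq):
    print(value, freq[value])   # value followed by how often it occurs
```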