- Chapter Introduction: Data Analysis: Making Sense of Data
- Chapter 1: Exploring Data
- Chapter 1.1: Analyzing Categorical Data
- Chapter 1.2: Displaying Quantitative Data with Graphs
- Chapter 1.3: Describing Quantitative Data with Numbers
- Chapter 2: Modeling Distributions of Data
- Chapter 2.1: Describing Location in a Distribution
- Chapter 2.2: Density Curves and Normal Distributions
- Chapter 3: Describing Relationships
- Chapter 3.1: Scatterplots and Correlation
- Chapter 3.2: Least-Squares Regression
- Chapter 4: Designing Studies
- Chapter 4.1: Sampling and Surveys
- Chapter 4.2: Experiments
- Chapter 4.3: Using Studies Wisely
- Chapter 5: Probability: What Are the Chances?
- Chapter 5.1: Randomness, Probability, and Simulation
- Chapter 5.2: Probability Rules
- Chapter 5.3: Conditional Probability and Independence
- Chapter 6: Random Variables
- Chapter 6.1: Discrete and Continuous Random Variables
- Chapter 6.2: Transforming and Combining Random Variables
- Chapter 6.3: Binomial and Geometric Random Variables
- Chapter 7: Sampling Distributions
- Chapter 7.1: What Is a Sampling Distribution?
- Chapter 7.2: Sample Proportions
- Chapter 7.3: Sample Means
- Chapter 8: Estimating with Confidence
- Chapter 8.1: Confidence Intervals: The Basics
- Chapter 8.2: Estimating a Population Proportion
- Chapter 8.3: Estimating a Population Mean
- Chapter 9: Testing a Claim
- Chapter 9.1: Significance Tests: The Basics
- Chapter 9.2: Tests about a Population Proportion
- Chapter 9.3: Tests about a Population Mean
- Chapter 10: Comparing Two Populations or Groups
- Chapter 10.1: Comparing Two Proportions
- Chapter 10.2: Comparing Two Means
- Chapter 11: Inference for Distributions of Categorical Data
- Chapter 11.1: Chi-Square Tests for Goodness of Fit
- Chapter 11.2: Inference for Two-Way Tables
- Chapter 12: More About Regression
- Chapter 12.1: Inference for Linear Regression
- Chapter 12.2: Transforming to Achieve Linearity
The Practice of Statistics 5th Edition - Solutions by Chapter
Additivity property of χ²
If two independent random variables X1 and X2 are distributed as chi-square with v1 and v2 degrees of freedom, respectively, Y = X1 + X2 is a chi-square random variable with v = v1 + v2 degrees of freedom. This generalizes to any number of independent chi-square random variables.
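As an illustrative check (not part of the glossary), the property can be verified by simulation; the degrees of freedom v1 = 3 and v2 = 5 below are arbitrary choices. A chi-square variable with v degrees of freedom has mean v and variance 2v, so the sum's sample mean and variance should be close to v1 + v2 and 2(v1 + v2).

```python
import numpy as np

# Empirical check of the chi-square additivity property:
# if X1 ~ chi-square(v1) and X2 ~ chi-square(v2) are independent,
# then Y = X1 + X2 ~ chi-square(v1 + v2), with mean v1 + v2 and
# variance 2 * (v1 + v2).
rng = np.random.default_rng(0)
v1, v2, n = 3, 5, 200_000

x1 = rng.chisquare(v1, size=n)
x2 = rng.chisquare(v2, size=n)
y = x1 + x2

print(y.mean())  # close to v1 + v2 = 8
print(y.var())   # close to 2 * (v1 + v2) = 16
```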
Aliased
In a fractional factorial experiment, when certain factor effects cannot be estimated uniquely, they are said to be aliased.
Analysis of variance (ANOVA)
A method of decomposing the total variability in a set of observations, as measured by the sum of the squares of these observations from their average, into component sums of squares that are associated with specific defined sources of variation.
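The decomposition can be made concrete with a small numerical sketch (the three treatment groups below are made-up data): the total sum of squares about the grand mean splits exactly into a between-group component and a within-group component.

```python
import numpy as np

# Sum-of-squares decomposition underlying one-way ANOVA, using
# three small hypothetical treatment groups. The identity
# SS_total = SS_between + SS_within holds exactly.
groups = [
    np.array([12.0, 14.0, 13.0]),
    np.array([15.0, 17.0, 16.0]),
    np.array([10.0, 11.0, 12.0]),
]
all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()

ss_total = ((all_obs - grand_mean) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

print(ss_total, ss_between + ss_within)  # the two values agree
```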
Attributes data
A qualitative characteristic of an item or unit, usually arising in quality control. For example, classifying production units as defective or nondefective results in attributes data.
Categorical data
Data consisting of counts or observations that can be classified into categories. The categories may be descriptive.
Confidence level
Another term for the confidence coefficient.
Consistent estimator
An estimator that converges in probability to the true value of the estimated parameter as the sample size increases.
Control chart
A graphical display used to monitor a process. It usually consists of a horizontal center line corresponding to the in-control value of the parameter that is being monitored and lower and upper control limits. The control limits are determined by statistical criteria and are not arbitrary, nor are they related to specification limits. If sample points fall within the control limits, the process is said to be in-control, or free from assignable causes. Points beyond the control limits indicate an out-of-control process; that is, assignable causes are likely present. This signals the need to find and remove the assignable causes.
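As a minimal sketch of how such limits are computed, the common 3-sigma limits for an x-bar chart follow from the fact that subgroup means of size n vary with standard deviation σ/√n; the in-control parameters (μ = 50, σ = 2, n = 4) below are hypothetical.

```python
import numpy as np

# 3-sigma control limits for an x-bar chart, assuming the in-control
# process mean and standard deviation are known (hypothetical values).
mu, sigma, n = 50.0, 2.0, 4          # assumed in-control parameters
center_line = mu
ucl = mu + 3 * sigma / np.sqrt(n)    # upper control limit
lcl = mu - 3 * sigma / np.sqrt(n)    # lower control limit

# An in-control subgroup of n observations yields a mean that falls
# between LCL and UCL about 99.7% of the time.
rng = np.random.default_rng(1)
subgroup_mean = rng.normal(mu, sigma, size=n).mean()

print(lcl, ucl, subgroup_mean)
```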
Covariance
A measure of association between two random variables obtained as the expected value of the product of the two random variables around their means; that is, Cov(X, Y) = E[(X − μ_X)(Y − μ_Y)].
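For a quick illustration with made-up data, the definitional formula (with the expectation taken as a plain average) matches NumPy's population covariance:

```python
import numpy as np

# Covariance computed two ways on small hypothetical data:
# the definitional form Cov(X, Y) = E[(X - mu_X)(Y - mu_Y)],
# and numpy's population covariance (ddof=0).
x = np.array([2.0, 4.0, 6.0, 8.0])
y = np.array([1.0, 3.0, 2.0, 6.0])

cov_def = ((x - x.mean()) * (y - y.mean())).mean()   # definitional form
cov_np = np.cov(x, y, ddof=0)[0, 1]                  # numpy population form

print(cov_def, cov_np)  # the two values agree
```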
Deming’s 14 points
A management philosophy promoted by W. Edwards Deming that emphasizes the importance of change and quality.
Distribution free method(s)
Any method of inference (hypothesis testing or confidence interval construction) that does not depend on the form of the underlying distribution of the observations. Sometimes called nonparametric method(s).
Error variance
The variance of an error term or component in a model.
Event
A subset of a sample space.
Exhaustive
A property of a collection of events that indicates that their union equals the sample space.
First-order model
A model that contains only first-order terms. For example, the first-order response surface model in two variables is y = β0 + β1x1 + β2x2 + ε. A first-order model is also called a main effects model.
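A first-order model of this form can be fit by ordinary least squares; the sketch below uses noiseless data generated from hypothetical coefficients (β0 = 1.5, β1 = 2.0, β2 = −0.5), so the fit recovers them exactly.

```python
import numpy as np

# Fitting the two-variable first-order model y = b0 + b1*x1 + b2*x2
# by least squares. The data are generated without noise from known
# (hypothetical) coefficients, so the fit should recover them.
rng = np.random.default_rng(2)
x1 = rng.uniform(0, 10, size=20)
x2 = rng.uniform(0, 10, size=20)
y = 1.5 + 2.0 * x1 - 0.5 * x2        # true first-order relationship

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(x1), x1, x2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

print(b0, b1, b2)  # close to 1.5, 2.0, -0.5
```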
Fraction defective control chart
See P chart
Gaussian distribution
Another name for the normal distribution, based on the strong connection of Karl F. Gauss to the normal distribution; often used in physics and electrical engineering applications.
Generating function
A function that is used to determine properties of the probability distribution of a random variable. See Moment-generating function.
Generators
Effects in a fractional factorial experiment that are used to construct the experimental tests used in the experiment. The generators also define the aliases.