The Practice of Statistics 5th Edition - Solutions by Chapter
- Chapter Introduction: Data Analysis: Making Sense of Data
- Chapter 1: Exploring Data
- Chapter 1.1: Analyzing Categorical Data
- Chapter 1.2: Displaying Quantitative Data with Graphs
- Chapter 1.3: Describing Quantitative Data with Numbers
- Chapter 2: Modeling Distributions of Data
- Chapter 2.1: Describing Location in a Distribution
- Chapter 2.2: Density Curves and Normal Distributions
- Chapter 3: Describing Relationships
- Chapter 3.1: Scatterplots and Correlation
- Chapter 3.2: Least-Squares Regression
- Chapter 4: Designing Studies
- Chapter 4.1: Sampling and Surveys
- Chapter 4.2: Experiments
- Chapter 4.3: Using Studies Wisely
- Chapter 5: Probability: What Are the Chances?
- Chapter 5.1: Randomness, Probability, and Simulation
- Chapter 5.2: Probability Rules
- Chapter 5.3: Conditional Probability and Independence
- Chapter 6: Random Variables
- Chapter 6.1: Discrete and Continuous Random Variables
- Chapter 6.2: Transforming and Combining Random Variables
- Chapter 6.3: Binomial and Geometric Random Variables
- Chapter 7: Sampling Distributions
- Chapter 7.1: What Is a Sampling Distribution?
- Chapter 7.2: Sample Proportions
- Chapter 7.3: Sample Means
- Chapter 8: Estimating with Confidence
- Chapter 8.1: Confidence Intervals: The Basics
- Chapter 8.2: Estimating a Population Proportion
- Chapter 8.3: Estimating a Population Mean
- Chapter 9: Testing a Claim
- Chapter 9.1: Significance Tests: The Basics
- Chapter 9.2: Tests about a Population Proportion
- Chapter 9.3: Tests about a Population Mean
- Chapter 10: Comparing Two Populations or Groups
- Chapter 10.1: Comparing Two Proportions
- Chapter 10.2: Comparing Two Means
- Chapter 11: Inference for Distributions of Categorical Data
- Chapter 11.1: Chi-Square Tests for Goodness of Fit
- Chapter 11.2: Inference for Two-Way Tables
- Chapter 12: More About Regression
- Chapter 12.1: Inference for Linear Regression
- Chapter 12.2: Transforming to Achieve Linearity
β-error (or β-risk)
In hypothesis testing, an error incurred by failing to reject a null hypothesis when it is actually false (also called a type II error).
Additivity property of χ²
If two independent random variables X1 and X2 are distributed as chi-square with v1 and v2 degrees of freedom, respectively, Y = X1 + X2 is a chi-square random variable with v = v1 + v2 degrees of freedom. This generalizes to any number of independent chi-square random variables.
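A quick empirical check of the additivity property above: the sum of two independent chi-square variables behaves as chi-square with v1 + v2 degrees of freedom. The degrees of freedom, seed, and sample size here are illustrative choices, not from the text.

```python
import numpy as np

# Simulate Y = X1 + X2 for independent chi-square X1, X2.
rng = np.random.default_rng(seed=1)
v1, v2 = 3, 5
n = 200_000
y = rng.chisquare(v1, size=n) + rng.chisquare(v2, size=n)

# A chi-square variable with v degrees of freedom has mean v and
# variance 2v, so the sample mean and variance of y should be near
# v1 + v2 = 8 and 2*(v1 + v2) = 16.
print(y.mean(), y.var())
```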
Adjusted R²
A variation of the R² statistic that compensates for the number of parameters in a regression model. Essentially, the adjustment is a penalty for increasing the number of parameters in the model.
Alias
In a fractional factorial experiment, when certain factor effects cannot be estimated uniquely, they are said to be aliased.
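The penalty that adjusted R² applies can be sketched with the usual formula, 1 − (1 − R²)(n − 1)/(n − p − 1), where n is the sample size and p the number of predictors; the function name is an illustrative choice, not from the text.

```python
# Minimal sketch of the standard adjusted R² formula.
def adjusted_r2(r2: float, n: int, p: int) -> float:
    # n - p - 1 is the error degrees of freedom for a model with
    # p predictors and an intercept.
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# With the same raw R², a model that uses more parameters is
# penalized to a lower adjusted value:
print(adjusted_r2(0.90, n=50, p=2))   # ≈ 0.8957
print(adjusted_r2(0.90, n=50, p=10))  # ≈ 0.8744
```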
Average
See Arithmetic mean.
Chi-square test
Any test of significance based on the chi-square distribution. The most common chi-square tests are (1) testing hypotheses about the variance or standard deviation of a normal distribution and (2) testing goodness of fit of a theoretical distribution to sample data.
Completely randomized design (or experiment)
A type of experimental design in which the treatments or design factors are assigned to the experimental units in a random manner. In designed experiments, a completely randomized design results from running all of the treatment combinations in random order.
Continuity correction
A correction factor used to improve the approximation to binomial probabilities from a normal distribution.
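As a sketch of the idea, the half-unit continuity correction approximates the binomial CDF by P(X ≤ k) ≈ Φ((k + 0.5 − np)/√(np(1 − p))); the function names and the n = 20, p = 0.5 example are illustrative assumptions, not from the text.

```python
from math import comb, erf, sqrt

def normal_cdf(z: float) -> float:
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def binom_cdf_normal_approx(k: int, n: int, p: float) -> float:
    mu = n * p
    sigma = sqrt(n * p * (1.0 - p))
    # Evaluate at k + 0.5 instead of k: the continuity correction.
    return normal_cdf((k + 0.5 - mu) / sigma)

# Compare with the exact binomial CDF for n = 20, p = 0.5, k = 10.
exact = sum(comb(20, i) for i in range(11)) / 2**20
approx = binom_cdf_normal_approx(10, 20, 0.5)
print(exact, approx)
```

With the correction, the normal approximation lands within a fraction of a percent of the exact probability; dropping the 0.5 would miss by far more.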
Correlation
In the most general usage, a measure of the interdependence among data. The concept may include more than two variables. The term is most commonly used in a narrow sense to express the relationship between quantitative variables or ranks.
Correlation coefficient
A dimensionless measure of the linear association between two variables, usually lying in the interval from −1 to +1, with zero indicating the absence of correlation (but not necessarily the independence of the two variables).
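A minimal sketch of the sample (Pearson) correlation coefficient described above; the function and variable names are illustrative choices.

```python
import numpy as np

def pearson_r(x, y):
    # Center both variables, then divide their inner product by the
    # product of their lengths (norms): a dimensionless value in [-1, 1].
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc = x - x.mean()
    yc = y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Perfect positive and perfect negative linear association:
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # ≈ +1
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # ≈ -1
```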
Correlation matrix
A square matrix that contains the correlations among a set of random variables, say, X1, X2, …, Xk. The main diagonal elements of the matrix are unity and the off-diagonal elements rij are the correlations between Xi and Xj.
Critical region
In hypothesis testing, this is the portion of the sample space of a test statistic that will lead to rejection of the null hypothesis.
Decision interval
A parameter in a tabular CUSUM algorithm that is determined from a trade-off between false alarms and the detection of assignable causes.
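A minimal sketch of where this parameter appears: in a one-sided (upper) tabular CUSUM, the chart signals when the cumulative sum exceeds the decision interval H. The reference value K, the target mean mu0, and the data below are illustrative assumptions, not from the text.

```python
# One-sided (upper) tabular CUSUM sketch. Raising H gives fewer false
# alarms but slower detection; lowering it does the opposite.
def cusum_upper(data, mu0, K, H):
    c_plus = 0.0
    signals = []
    for i, x in enumerate(data):
        # Accumulate deviations above the reference value mu0 + K,
        # never letting the statistic go below zero.
        c_plus = max(0.0, x - (mu0 + K) + c_plus)
        if c_plus > H:
            signals.append(i)  # index where the chart signals a shift
    return signals

# In-control data never signals; a sustained upward shift does.
print(cusum_upper([10.0, 10.0, 10.0], mu0=10.0, K=0.5, H=4.0))
print(cusum_upper([10.0, 10.0, 10.0, 13.0, 13.0, 13.0],
                  mu0=10.0, K=0.5, H=4.0))
```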
Defects-per-unit control chart
See U chart.
Deming’s 14 points
A management philosophy promoted by W. Edwards Deming that emphasizes the importance of change and quality.
Dependent variable
The response variable in regression or a designed experiment.
Discrete distribution
A probability distribution for a discrete random variable.
Error of estimation
The difference between an estimated value and the true value.
Error propagation
An analysis of how the variance of the random variable that represents the output of a system depends on the variances of the inputs. A formula exists when the output is a linear function of the inputs, and the formula is simplified if the inputs are assumed to be independent.
Fisher’s least significant difference (LSD) method
A series of pair-wise hypothesis tests of treatment means in an experiment to determine which means differ.
Harmonic mean
The harmonic mean of a set of data values is the reciprocal of the arithmetic mean of the reciprocals of the data values; that is, h = [ (1/n) ∑_{i=1}^{n} (1/x_i) ]^{-1}.
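The definition above translates directly into code; the function name is an illustrative choice.

```python
# Harmonic mean: reciprocal of the mean of the reciprocals.
# Assumes all values are nonzero (a zero would divide by zero).
def harmonic_mean(values):
    n = len(values)
    return n / sum(1.0 / v for v in values)

# Example: for [1, 2, 4] the reciprocals average to 1.75/3,
# so the harmonic mean is 3/1.75 = 12/7.
print(harmonic_mean([1, 2, 4]))  # ≈ 1.7143
```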