- Chapter 1: Picturing Distributions with Graphs
- Chapter 2: Describing Distributions with Numbers
- Chapter 3: The Normal Distributions
- Chapter 4: Scatterplots and Correlation
- Chapter 5: Regression
- Chapter 6: Two-Way Tables
- Chapter 7: Exploring Data: Part I Review
- Chapter 8: Producing Data: Sampling
- Chapter 9: Producing Data: Experiments
- Chapter 10: Introducing Probability
- Chapter 11: Sampling Distributions
- Chapter 12: General Rules of Probability
- Chapter 13: Binomial Distributions
- Chapter 14: Confidence Intervals: The Basics
- Chapter 15: Tests of Significance: The Basics
- Chapter 16: Inference in Practice
- Chapter 17: From Exploration to Inference: Part II Review
- Chapter 18: Inference about a Population Mean
- Chapter 19: Two-Sample Problems
- Chapter 20: Inference about a Population Proportion
- Chapter 21: Comparing Two Proportions
- Chapter 22: Inference about Variables: Part III Review
- Chapter 23: Two Categorical Variables: The Chi-Square Test
- Chapter 24: Inference for Regression
- Chapter 25: One-Way Analysis of Variance: Comparing Several Means
- Chapter 26: Nonparametric Tests
- Chapter 27: Statistical Process Control
- Chapter 28: Multiple Regression
The Basic Practice of Statistics 4th Edition - Solutions by Chapter
Acceptance region
In hypothesis testing, a region in the sample space of the test statistic such that if the test statistic falls within it, the null hypothesis cannot be rejected. This terminology is used because rejection of H0 is always a strong conclusion and acceptance of H0 is generally a weak conclusion.
Addition rule
A formula used to determine the probability of the union of two (or more) events from the probabilities of the events and their intersection(s).
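For two events the rule reads P(A ∪ B) = P(A) + P(B) − P(A ∩ B). A minimal sketch (the die-roll events are hypothetical, not from the text) that checks the rule by exhaustive counting:

```python
from fractions import Fraction

# One roll of a fair die; exact probabilities via Fraction.
outcomes = range(1, 7)
A = {x for x in outcomes if x % 2 == 0}   # even roll: {2, 4, 6}
B = {x for x in outcomes if x >= 4}       # {4, 5, 6}

def p(event):
    return Fraction(len(event), 6)        # equally likely outcomes

# Addition rule: P(A or B) = P(A) + P(B) - P(A and B).
p_union = p(A) + p(B) - p(A & B)
print(p_union)                            # 2/3, matching p(A | B) by direct counting
```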
Assignable cause
The portion of the variability in a set of observations that can be traced to specific causes, such as operators, materials, or equipment. Also called a special cause.
Box plot (or box and whisker plot)
A graphical display of data in which the box contains the middle 50% of the data (the interquartile range) with the median dividing it, and the whiskers extend to the smallest and largest values (or some defined lower and upper limits).
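The box and whiskers are drawn from the five-number summary. A small sketch using the standard library's quartile routine (the data values are illustrative, not from the text):

```python
from statistics import quantiles

# Five-number summary underlying a box plot.
data = [1, 2, 3, 4, 5, 6, 7, 8, 9]
q1, median, q3 = quantiles(data, n=4, method="inclusive")
summary = (min(data), q1, median, q3, max(data))
iqr = q3 - q1                             # the box spans the interquartile range
print(summary, iqr)                       # (1, 3.0, 5.0, 7.0, 9) 4.0
```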
Central limit theorem
The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. It is a necessary and sufficient condition that none of the variances of the individual random variables are large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
Central tendency
The tendency of data to cluster around some value. Central tendency is usually expressed by a measure of location such as the mean, median, or mode.
Comparative experiment
An experiment in which the treatments (experimental conditions) that are to be studied are included in the experiment. The data from the experiment are used to evaluate the treatments.
Conditional probability
The probability of an event given that the random experiment produces an outcome in another event.
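For events A and B with P(B) > 0, this is P(A | B) = P(A ∩ B) / P(B). A minimal sketch by exhaustive counting (the two-dice example is hypothetical, not from the text):

```python
from fractions import Fraction

# Two rolls of a fair die: 36 equally likely outcomes.
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]
B = {o for o in outcomes if o[0] + o[1] >= 10}   # sum is at least 10
A = {o for o in outcomes if o[0] == 6}           # first roll is a 6

def p(event):
    return Fraction(len(event), 36)

# Conditional probability: P(A | B) = P(A and B) / P(B).
p_a_given_b = p(A & B) / p(B)
print(p_a_given_b)   # 1/2: of the 6 outcomes with sum >= 10, 3 start with a 6
```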
Confidence interval
If it is possible to write a probability statement of the form P(L ≤ θ ≤ U) = 1 − α, where L and U are functions of only the sample data and θ is a parameter, then the interval between L and U is called a confidence interval (or a 100(1 − α)% confidence interval). The interpretation is that a statement that the parameter θ lies in this interval will be true 100(1 − α)% of the times that such a statement is made.
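As a sketch, the familiar large-sample interval for a population mean takes L = x̄ − z·s/√n and U = x̄ + z·s/√n, where z cuts off α/2 in each normal tail. The data below are illustrative, and a z interval is shown for simplicity (with a sample this small one would normally use t):

```python
from statistics import mean, stdev, NormalDist

data = [4.8, 5.1, 5.0, 4.7, 5.3, 5.2, 4.9, 5.0, 5.1, 4.9]
alpha = 0.05                                  # 95% confidence
n = len(data)
x_bar, s = mean(data), stdev(data)
z = NormalDist().inv_cdf(1 - alpha / 2)       # about 1.96

half_width = z * s / n ** 0.5
L, U = x_bar - half_width, x_bar + half_width
print(f"{L:.3f} <= mu <= {U:.3f}")
```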
Cumulative distribution function
For a random variable X, the function defined as F(x) = P(X ≤ x) that is used to specify the probability distribution.
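A minimal sketch for a discrete random variable (the coin-flip example is illustrative, not from the text): X = number of heads in two fair coin flips, so P(X = 0) = 1/4, P(X = 1) = 1/2, P(X = 2) = 1/4.

```python
# Probability mass function of X = number of heads in two fair flips.
pmf = {0: 0.25, 1: 0.50, 2: 0.25}

def F(x):
    """Cumulative distribution function: F(x) = P(X <= x)."""
    return sum(p for k, p in pmf.items() if k <= x)

print(F(0), F(1), F(2))   # 0.25 0.75 1.0
```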
Curvilinear regression
An expression sometimes used for nonlinear regression models or polynomial regression models.
Defect
Used in statistical quality control, a defect is a particular type of nonconformance to specifications or requirements. Sometimes defects are classified into types, such as appearance defects and functional defects.
Defining relation
A subset of effects in a fractional factorial design that define the aliases in the design.
Designed experiment
An experiment in which the tests are planned in advance and the plans usually incorporate statistical models. See Experiment.
Discrete distribution
A probability distribution for a discrete random variable.
Error sum of squares
In analysis of variance, this is the portion of total variability that is due to the random component in the data. It is usually based on replication of observations at certain treatment combinations in the experiment. It is sometimes called the residual sum of squares, although that term is better reserved for the case when the sum of squares is based on the remnants of a model-fitting process and not on replication.
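In one-way ANOVA this is SSE = Σᵢ Σⱼ (yᵢⱼ − ȳᵢ.)², pooling each observation's squared deviation from its own treatment mean. A minimal sketch (the three treatment groups are illustrative, not from the text):

```python
from statistics import mean

# Observations grouped by treatment.
groups = {
    "A": [10.0, 12.0, 11.0],
    "B": [14.0, 15.0, 16.0],
    "C": [9.0, 8.0, 10.0],
}

# Error sum of squares: squared deviations from each group's own mean.
sse = sum(
    (y - mean(ys)) ** 2
    for ys in groups.values()
    for y in ys
)
print(sse)   # 6.0: each group contributes 2.0
```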
Factorial experiment
A type of experimental design in which every level of one factor is tested in combination with every level of another factor. In general, in a factorial experiment, all possible combinations of factor levels are tested.
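The run list of a full factorial experiment is just the Cartesian product of the factor levels. A sketch with two hypothetical factors (the factor names and levels are for illustration only):

```python
from itertools import product

# A 2 x 3 full factorial: every combination of levels is a run.
factors = {
    "temperature": [150, 180],
    "pressure": [10, 20, 30],
}

runs = list(product(*factors.values()))
print(len(runs))   # 2 * 3 = 6 treatment combinations
```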
Harmonic mean
The harmonic mean of a set of data values is the reciprocal of the arithmetic mean of the reciprocals of the data values; that is, h = n / (1/x_1 + 1/x_2 + ... + 1/x_n).
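A one-line check of the formula against the standard library (the data values are illustrative):

```python
from statistics import harmonic_mean

data = [2.0, 4.0, 4.0]
n = len(data)

# h = n / (sum of reciprocals), the reciprocal of the mean reciprocal.
h = n / sum(1 / x for x in data)
print(h)   # 3.0
```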