- Chapter 1: The Nature of Probability and Statistics
- Chapter 1.4: Experimental Design
- Chapter 2: Frequency Distributions and Graphs
- Chapter 2.1: Organizing Data
- Chapter 2.2: Histograms, Frequency Polygons, and Ogives
- Chapter 2.3: Other Types of Graphs
- Chapter 3: Data Description
- Chapter 3.1: Measures of Central Tendency
- Chapter 3.2: Measures of Variation
- Chapter 3.3: Measures of Position
- Chapter 3.4: Exploratory Data Analysis
- Chapter 4: Probability and Counting Rules
- Chapter 4.1: Sample Spaces and Probability
- Chapter 4.2: The Addition Rules for Probability
- Chapter 4.3: The Multiplication Rules and Conditional Probability
- Chapter 4.4: Counting Rules
- Chapter 4.5: Probability and Counting Rules
- Chapter 5: Discrete Probability Distributions
- Chapter 5.1: Probability Distributions
- Chapter 5.2: Mean, Variance, Standard Deviation, and Expectation
- Chapter 5.3: The Binomial Distribution
- Chapter 5.4: Other Types of Distributions
- Chapter 6: The Normal Distribution
- Chapter 6.1: Normal Distributions
- Chapter 6.2: Applications of the Normal Distribution
- Chapter 6.3: The Central Limit Theorem
- Chapter 6.4: The Normal Approximation to the Binomial Distribution
- Chapter 7: Confidence Intervals and Sample Size
- Chapter 7.1: Confidence Intervals for the Mean When σ Is Known
- Chapter 7.2: Confidence Intervals for the Mean When σ Is Unknown
- Chapter 7.3: Confidence Intervals and Sample Size for Proportions
- Chapter 7.4: Confidence Intervals for Variances and Standard Deviations
- Chapter 8: Hypothesis Testing
Elementary Statistics: A Step By Step Approach 9th Edition - Solutions by Chapter
Adjusted R²
A variation of the R² statistic that compensates for the number of parameters in a regression model. Essentially, the adjustment is a penalty for increasing the number of parameters in the model.
Alias
In a fractional factorial experiment, when certain factor effects cannot be estimated uniquely, they are said to be aliased.
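A minimal sketch of the adjusted R² computation, with illustrative values for R², the sample size n, and the number of predictors p (all chosen here for demonstration only):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R-squared for a model with p predictors fit to n observations.

    The (n - 1) / (n - p - 1) factor grows with p, so adding parameters
    that do not raise R-squared lowers the adjusted value.
    """
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Same R-squared, more parameters -> smaller adjusted R-squared.
print(adjusted_r2(0.90, n=50, p=2))   # ≈ 0.8957
print(adjusted_r2(0.90, n=50, p=10))  # ≈ 0.8744
```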
Asymptotic relative efficiency (ARE)
Used to compare hypothesis tests. The ARE of one test relative to another is the limiting ratio of the sample sizes necessary to obtain identical error probabilities for the two procedures.
Attribute
A qualitative characteristic of an item or unit, usually arising in quality control. For example, classifying production units as defective or nondefective results in attributes data.
Attribute control chart
Any control chart for a discrete random variable. See Variables control chart.
Bimodal distribution
A distribution with two modes.
Central limit theorem
The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. It is a necessary and sufficient condition that none of the variances of the individual random variables is large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
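The theorem can be illustrated by simulation (a sketch; n = 30 and 20,000 replicates are arbitrary choices). Sums of independent Uniform(0, 1) variables have mean n/2 and variance n/12, and their distribution is approximately normal:

```python
import random
import statistics

random.seed(1)

# Each observation is the sum of n i.i.d. Uniform(0, 1) variables.
n, reps = 30, 20_000
sums = [sum(random.random() for _ in range(n)) for _ in range(reps)]

print(statistics.mean(sums))   # close to n/2 = 15
print(statistics.stdev(sums))  # close to sqrt(n/12) ≈ 1.58

# A normal distribution puts about 68% of its mass within one
# standard deviation of the mean; the simulated sums agree.
m, s = statistics.mean(sums), statistics.stdev(sums)
within = sum(1 for x in sums if m - s <= x <= m + s) / reps
print(within)  # close to 0.68
```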
Central tendency
The tendency of data to cluster around some value. Central tendency is usually expressed by a measure of location such as the mean, median, or mode.
Coefficient of determination
See R².
Conditional probability distribution
The distribution of a random variable given that the random experiment produces an outcome in an event. The given event might specify values for one or more other random variables.
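For example, rolling two fair dice and conditioning on the event "the total is 7" gives a conditional distribution for the first die. A short enumeration sketch:

```python
from fractions import Fraction
from collections import Counter

# All 36 equally likely outcomes of two fair dice.
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]

# The given event: the total is 7 (six outcomes).
event = [(a, b) for (a, b) in outcomes if a + b == 7]

# Conditional distribution of the first die given the event.
counts = Counter(a for (a, b) in event)
dist = {a: Fraction(c, len(event)) for a, c in counts.items()}
print(dist)  # every value 1..6 has conditional probability 1/6
```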
Confounding
When a factorial experiment is run in blocks and the blocks are too small to contain a complete replicate of the experiment, one can run a fraction of the replicate in each block, but this results in losing information on some effects. These effects are linked with or confounded with the blocks. In general, when two factors are varied such that their individual effects cannot be determined separately, their effects are said to be confounded.
Critical region
In hypothesis testing, this is the portion of the sample space of a test statistic that will lead to rejection of the null hypothesis.
Critical value
The value of a statistic corresponding to a stated significance level as determined from the sampling distribution. For example, if P(Z ≥ z₀.₀₂₅) = P(Z ≥ 1.96) = 0.025, then z₀.₀₂₅ = 1.96 is the critical value of z at the 0.025 level of significance.
Crossed factors
Another name for factors that are arranged in a factorial experiment.
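The 0.025-level critical value of z can be reproduced with Python's standard library (Python 3.8+): it is the 97.5th percentile of the standard normal distribution.

```python
from statistics import NormalDist

# z such that P(Z >= z) = 0.025, i.e. the 97.5th percentile
# of the standard normal distribution.
z = NormalDist().inv_cdf(1 - 0.025)
print(round(z, 2))  # 1.96
```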
Cumulative sum control chart (CUSUM)
A control chart in which the point plotted at time t is the sum of the measured deviations from target for all statistics up to time t.
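A sketch of the plotted statistic, using a hypothetical target and measurement series (the numbers are illustrative, not from any real process):

```python
from itertools import accumulate

target = 10.0
# Hypothetical successive sample means from a monitored process.
measurements = [9.8, 10.3, 10.1, 10.6, 10.4, 10.7]

deviations = [x - target for x in measurements]
# The CUSUM value plotted at time t is the running sum of deviations
# from target through time t.
cusum = list(accumulate(deviations))
print(cusum)  # -0.2, 0.1, 0.2, 0.8, 1.2, 1.9 (up to floating-point rounding)
```

A sustained shift away from target shows up as a steady drift in the plotted sums, which is what makes the chart sensitive to small shifts.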
Defining relation
A subset of effects in a fractional factorial design that define the aliases in the design.
Dispersion
The amount of variability exhibited by data.
Distribution function
Another name for a cumulative distribution function.
Error sum of squares
In analysis of variance, this is the portion of total variability that is due to the random component in the data. It is usually based on replication of observations at certain treatment combinations in the experiment. It is sometimes called the residual sum of squares, although this is a better term to use only when the sum of squares is based on the remnants of a model-fitting process and not on replication.
Geometric random variable
A discrete random variable that is the number of Bernoulli trials until a success occurs.
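A simulation sketch (p = 0.25 is an arbitrary choice): counting Bernoulli trials up to and including the first success gives a sample mean near the geometric distribution's expected value 1/p.

```python
import random
import statistics

def geometric_trial(p, rng):
    """Number of Bernoulli(p) trials up to and including the first success."""
    trials = 1
    while rng.random() >= p:  # failure: try again
        trials += 1
    return trials

rng = random.Random(0)
p = 0.25
draws = [geometric_trial(p, rng) for _ in range(50_000)]
print(statistics.mean(draws))  # close to 1/p = 4
```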
Goodness of fit
In general, the agreement of a set of observed values and a set of theoretical values that depend on some hypothesis. The term is often used in fitting a theoretical distribution to a set of observations.
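A minimal sketch of one common measure of agreement, the Pearson chi-square statistic, applied to hypothetical counts from 60 rolls of a die assumed fair:

```python
# Pearson chi-square goodness-of-fit statistic:
# sum over categories of (observed - expected)^2 / expected.
observed = [8, 12, 9, 11, 10, 10]   # hypothetical counts from 60 rolls
expected = [sum(observed) / 6] * 6  # fair-die hypothesis: 10 per face

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi_sq)  # 1.0 (up to floating-point rounding)
```

A small statistic (relative to the chi-square distribution with 5 degrees of freedom here) indicates good agreement between the observed counts and the hypothesized distribution.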
Hat matrix
In multiple regression, the matrix H = X(XᵀX)⁻¹Xᵀ. This is a projection matrix that maps the vector of observed response values into the vector of fitted values: ŷ = X(XᵀX)⁻¹Xᵀy = Hy.
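The projection property can be checked directly. A small sketch using exact rational arithmetic and a hypothetical 3 × 2 design matrix (intercept column plus x = 0, 1, 2):

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Hypothetical simple-linear-regression design matrix.
X = [[Fraction(1), Fraction(0)],
     [Fraction(1), Fraction(1)],
     [Fraction(1), Fraction(2)]]

Xt = transpose(X)
XtX = matmul(Xt, X)  # 2x2, so it can be inverted analytically

# Inverse of the 2x2 matrix [[a, b], [c, d]].
a, b = XtX[0]
c, d = XtX[1]
det = a * d - b * c
XtX_inv = [[d / det, -b / det], [-c / det, a / det]]

H = matmul(matmul(X, XtX_inv), Xt)  # the hat matrix

# As a projection, H is symmetric and idempotent (H @ H == H).
print(H == transpose(H))  # True
print(matmul(H, H) == H)  # True
```

Exact fractions are used so the idempotence check holds with equality rather than only to floating-point tolerance.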