# Solutions for Chapter 13: Comparing Three or More Means

## Full solutions for Statistics: Informed Decisions Using Data | 5th Edition

ISBN: 9780134133539


Summary of Chapter 13: Comparing Three or More Means

Comparing Three or More Means (One-Way Analysis of Variance). Post Hoc Tests on One-Way Analysis of Variance. The Randomized Complete Block Design. Two-Way Analysis of Variance.

This textbook survival guide was created for the textbook Statistics: Informed Decisions Using Data, edition 5, which is associated with the ISBN 9780134133539. Chapter 13: Comparing Three or More Means includes 12 full step-by-step solutions, and more than 28,679 students have viewed them. This expansive survival guide also covers the book's other chapters and their solutions.

Key Statistics Terms and definitions covered in this textbook
• α-error (or α-risk)

In hypothesis testing, an error incurred by rejecting a null hypothesis when it is actually true (also called a type I error).

• β-error (or β-risk)

In hypothesis testing, an error incurred by failing to reject a null hypothesis when it is actually false (also called a type II error).

• Alias

In a fractional factorial experiment when certain factor effects cannot be estimated uniquely, they are said to be aliased.

• All possible (subsets) regressions

A method of variable selection in regression that examines all possible subsets of the candidate regressor variables. Efficient computer algorithms have been developed for implementing all possible regressions.

• Bernoulli trials

Sequences of independent trials with only two outcomes, generally called “success” and “failure,” in which the probability of success remains constant.
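
As a quick sketch (an added illustration, not from the textbook), Bernoulli trials are easy to simulate, and the long-run success fraction settles near the fixed success probability p:

```python
import random

def bernoulli_trials(n, p, seed=0):
    """Simulate n independent Bernoulli trials with success probability p,
    coding success as 1 and failure as 0."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

trials = bernoulli_trials(100_000, 0.3)
print(sum(trials) / len(trials))   # close to 0.3
```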

• Bivariate normal distribution

The joint distribution of two normal random variables.

• Central limit theorem

The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. It is a necessary and sufficient condition that none of the variances of the individual random variables are large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
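
As a quick illustrative sketch (not from the textbook), the theorem can be seen numerically: sums of n = 30 independent Uniform(0, 1) variables have theoretical mean n/2 = 15 and variance n/12 = 2.5, and simulated sums land close to those values while their histogram looks approximately normal.

```python
import random
import statistics

rng = random.Random(42)   # fixed seed so the sketch is reproducible
n = 30                    # number of summands per replicate
reps = 5000

# Each replicate is the sum of n independent Uniform(0, 1) variables.
sums = [sum(rng.random() for _ in range(n)) for _ in range(reps)]

m = statistics.mean(sums)      # theory: n/2 = 15.0
v = statistics.variance(sums)  # theory: n/12 = 2.5
print(m, v)
```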

• Chi-square test

Any test of significance based on the chi-square distribution. The most common chi-square tests are (1) testing hypotheses about the variance or standard deviation of a normal distribution and (2) testing goodness of fit of a theoretical distribution to sample data.
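
As a sketch of case (2), with made-up counts: the goodness-of-fit statistic is Σ(Oᵢ − Eᵢ)²/Eᵢ, summed over the categories. For 120 die rolls under the fair-die hypothesis, each face has expected count 20.

```python
# Hypothetical observed counts for 120 rolls of a die (faces 1..6).
observed = [18, 22, 16, 25, 19, 20]
expected = [sum(observed) / 6] * 6   # fair-die expectation: 20 per face

# Chi-square goodness-of-fit statistic.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi2)   # 2.5
```

With 5 degrees of freedom the 0.05 critical value is roughly 11.07, so a statistic of 2.5 gives no evidence against fairness.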

• Combination.

A subset selected without replacement from a set used to determine the number of outcomes in events and sample spaces.
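
For concreteness (an added sketch, not part of the glossary), the count of such subsets is C(n, k) = n! / (k! (n − k)!), available in Python's standard library:

```python
import math

# C(n, k): the number of size-k subsets chosen without replacement from
# n items, i.e. n! / (k! * (n - k)!).
hands = math.comb(52, 5)   # number of 5-card hands from a 52-card deck
print(hands)               # 2598960
```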

• Conditional probability mass function

The probability mass function of the conditional probability distribution of a discrete random variable.
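
A small illustrative sketch (the joint pmf values here are invented): the conditional pmf of Y given X = x is obtained by dividing the joint pmf p(x, y) by the marginal p_X(x).

```python
# Hypothetical joint pmf of a discrete pair (X, Y), stored as a dict.
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

def cond_pmf_y_given_x(x):
    """Conditional pmf of Y given X = x: p(x, y) / p_X(x)."""
    px = sum(p for (a, _), p in joint.items() if a == x)  # marginal of X at x
    return {y: p / px for (a, y), p in joint.items() if a == x}

print(cond_pmf_y_given_x(1))   # {0: 0.3/0.7, 1: 0.4/0.7}
```

Note the conditional probabilities sum to 1, as any pmf must.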

• Continuity correction.

A correction factor used to improve the approximation to binomial probabilities from a normal distribution.
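
A minimal sketch of the idea: to approximate P(X ≤ x) for X ~ Binomial(n, p), evaluate the normal cdf at x + 0.5 rather than x. Comparing against the exact binomial cdf shows how close the corrected approximation is.

```python
import math
from statistics import NormalDist

def binom_cdf_normal_approx(x, n, p):
    """P(X <= x) for X ~ Binomial(n, p), normal approximation with the
    continuity correction: evaluate the normal cdf at x + 0.5."""
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    return NormalDist(mu, sigma).cdf(x + 0.5)

def binom_cdf_exact(x, n, p):
    """Exact binomial cdf, by summing the pmf."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

approx = binom_cdf_normal_approx(55, 100, 0.5)
exact = binom_cdf_exact(55, 100, 0.5)
print(approx, exact)   # both close to 0.86
```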

• Control chart

A graphical display used to monitor a process. It usually consists of a horizontal center line corresponding to the in-control value of the parameter that is being monitored and lower and upper control limits. The control limits are determined by statistical criteria and are not arbitrary, nor are they related to specification limits. If sample points fall within the control limits, the process is said to be in-control, or free from assignable causes. Points beyond the control limits indicate an out-of-control process; that is, assignable causes are likely present. This signals the need to find and remove the assignable causes.
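
A simplified sketch of an x̄-chart with invented measurements: control limits are placed at the center line ± 3 standard errors of the subgroup mean. (Here σ is estimated by the pooled sample standard deviation for brevity; control-chart texts usually estimate it from R̄ or s̄ with tabulated constants.)

```python
import math
import statistics

# Hypothetical subgroups of n = 4 measurements each.
samples = [
    [10.1, 9.9, 10.0, 10.2],
    [10.0, 10.1, 9.8, 10.1],
    [9.9, 10.0, 10.2, 10.0],
]
n = len(samples[0])

xbars = [sum(s) / n for s in samples]          # subgroup means (plotted points)
center = sum(xbars) / len(xbars)               # center line
sigma_hat = statistics.stdev([x for s in samples for x in s])  # crude sigma estimate

lcl = center - 3 * sigma_hat / math.sqrt(n)    # lower control limit
ucl = center + 3 * sigma_hat / math.sqrt(n)    # upper control limit
in_control = [lcl <= xb <= ucl for xb in xbars]
print(lcl, ucl, in_control)
```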

• Correlation

In the most general usage, a measure of the interdependence among data. The concept may include more than two variables. The term is most commonly used in a narrow sense to express the relationship between quantitative variables or ranks.

• Covariance

A measure of association between two random variables obtained as the expected value of the product of the two random variables around their means; that is, Cov(X, Y) = E[(X − μ_X)(Y − μ_Y)].
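
The sample analogue of this definition is a direct computation (an added sketch; the data are made up):

```python
def cov(xs, ys):
    """Sample covariance: average product of deviations from the means
    (divisor n - 1)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]    # ys = 2 * xs, a perfect linear relationship
print(cov(xs, ys))       # 5.0, which equals 2 * Var(xs) since Var(xs) = 2.5
```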

• Dependent variable

The response variable in regression or a designed experiment.

• Erlang random variable

A continuous random variable that is the sum of a fixed number of independent, exponential random variables.
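
That definition translates directly into a simulation (a sketch with arbitrary parameters): an Erlang(k, λ) draw is the sum of k independent Exponential(λ) draws, and its mean is k/λ.

```python
import random

def erlang_sample(k, lam, rng):
    """One Erlang(k, lam) draw: the sum of k independent Exponential(lam) draws."""
    return sum(rng.expovariate(lam) for _ in range(k))

rng = random.Random(1)   # fixed seed for reproducibility
k, lam = 3, 2.0
draws = [erlang_sample(k, lam, rng) for _ in range(20000)]
print(sum(draws) / len(draws))   # close to the theoretical mean k / lam = 1.5
```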

• Error mean square

The error sum of squares divided by its number of degrees of freedom.
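
In the one-way ANOVA of this chapter, the error degrees of freedom are N − k (N total observations, k treatment groups). A small numerical sketch with invented data:

```python
# Hypothetical data: k = 3 treatment groups, 3 observations each.
groups = [[3.0, 4.0, 5.0], [6.0, 7.0, 8.0], [2.0, 3.0, 4.0]]

# SSE: squared deviations of each observation from its own group mean.
sse = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

N = sum(len(g) for g in groups)   # total observations: 9
k = len(groups)                   # number of groups: 3
mse = sse / (N - k)               # error mean square: SSE / (N - k)
print(sse, mse)                   # 6.0 1.0
```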

• First-order model

A model that contains only first-order terms. For example, the first-order response surface model in two variables is y = β₀ + β₁x₁ + β₂x₂ + ε. A first-order model is also called a main effects model.
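
As a sketch (not the textbook's method), such a model can be fit by least squares via the normal equations; the data below are generated exactly from y = 1 + 2x₁ + 3x₂, so the fit recovers those coefficients.

```python
def fit_first_order(x1, x2, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2: build the normal
    equations (X'X) b = X'y and solve them by Gaussian elimination."""
    rows = [[1.0, a, b] for a, b in zip(x1, x2)]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    v = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(3)]
    # Forward elimination with partial pivoting.
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        v[c], v[p] = v[p], v[c]
        for r in range(c + 1, 3):
            f = A[r][c] / A[c][c]
            for j in range(c, 3):
                A[r][j] -= f * A[c][j]
            v[r] -= f * v[c]
    # Back substitution.
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        beta[i] = (v[i] - sum(A[i][j] * beta[j] for j in range(i + 1, 3))) / A[i][i]
    return beta

x1 = [0.0, 1.0, 2.0, 3.0]
x2 = [1.0, 0.0, 1.0, 2.0]
y = [1 + 2 * a + 3 * b for a, b in zip(x1, x2)]
print(fit_first_order(x1, x2, y))   # close to [1.0, 2.0, 3.0]
```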

• Fixed factor (or fixed effect).

In analysis of variance, a factor or effect is considered fixed if all the levels of interest for that factor are included in the experiment. Conclusions are then valid about this set of levels only, although when the factor is quantitative, it is customary to fit a model to the data for interpolating between these levels.

• Generator

Effects in a fractional factorial experiment that are used to construct the experimental tests used in the experiment. The generators also define the aliases.