# Solutions for Chapter 10-4: Correlation and Regression

## Full solutions for Elementary Statistics: A Step by Step Approach | 7th Edition

ISBN: 9780073534978


Chapter 10-4: Correlation and Regression includes 26 full step-by-step solutions. This textbook survival guide was created for Elementary Statistics: A Step by Step Approach, 7th edition (ISBN: 9780073534978), and covers the following chapters and their solutions. Since all 26 problems in Chapter 10-4: Correlation and Regression have been answered, more than 30,594 students have viewed full step-by-step solutions from this chapter.

### Key Statistics Terms and Definitions Covered in This Textbook
• Arithmetic mean

The arithmetic mean of a set of numbers x_1, x_2, …, x_n is their sum divided by the number of observations: x̄ = (1/n) · Σ x_i. The arithmetic mean is usually denoted by x̄ and is often called the average.
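
The formula above can be checked with a couple of lines of Python (the sample values are invented for illustration):

```python
# Arithmetic mean: the sum of the observations divided by their count.
data = [4, 8, 15, 16, 23, 42]  # hypothetical sample
mean = sum(data) / len(data)
print(mean)  # 18.0
```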

• Backward elimination

A method of variable selection in regression that begins with all of the candidate regressor variables in the model and eliminates the insignificant regressors one at a time until only significant regressors remain.

• Binomial random variable

A discrete random variable that equals the number of successes in a fixed number of Bernoulli trials.
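
For a concrete example of the binomial probability mass function P(X = k) = C(n, k) · p^k · (1 − p)^(n − k), here is a short stdlib-only Python sketch (the coin-flip scenario is invented for illustration):

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k): probability of exactly k successes in n Bernoulli trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Hypothetical example: exactly 3 heads in 5 fair-coin flips.
print(binomial_pmf(3, 5, 0.5))  # 0.3125
```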

• Central composite design (CCD)

A second-order response surface design in k variables consisting of a two-level factorial, 2k axial runs, and one or more center points. The two-level factorial portion of a CCD can be a fractional factorial design when k is large. The CCD is the most widely used design for itting a second-order model.

• Conditional probability mass function

The probability mass function of the conditional probability distribution of a discrete random variable.
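
To make this concrete, the conditional pmf of Y given X = x can be computed from a joint pmf by restricting to the rows where X = x and renormalizing by the marginal P(X = x). A minimal Python sketch, with a made-up joint distribution:

```python
# Joint pmf of (X, Y) stored as a dict mapping (x, y) -> probability.
joint = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.1}  # hypothetical

def conditional_pmf(joint, x):
    """pmf of Y given X = x: joint probabilities divided by the marginal P(X = x)."""
    px = sum(p for (xi, _), p in joint.items() if xi == x)  # marginal P(X = x)
    return {y: p / px for (xi, y), p in joint.items() if xi == x}

print(conditional_pmf(joint, 0))  # P(Y=0 | X=0) = 0.4, P(Y=1 | X=0) = 0.6
```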

• Confounding

When a factorial experiment is run in blocks and the blocks are too small to contain a complete replicate of the experiment, one can run a fraction of the replicate in each block, but this results in losing information on some effects. These effects are linked with or confounded with the blocks. In general, when two factors are varied such that their individual effects cannot be determined separately, their effects are said to be confounded.

• Confidence coefficient

The probability 1 − α associated with a confidence interval, expressing the probability that the stated interval will contain the true parameter value.

• Continuous distribution

A probability distribution for a continuous random variable.

• Correlation matrix

A square matrix that contains the correlations among a set of random variables, say X_1, X_2, …, X_k. The main diagonal elements of the matrix are unity, and the off-diagonal elements r_ij are the correlations between X_i and X_j.
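
A minimal stdlib-only Python sketch of building such a matrix from pairwise Pearson correlations (the three data columns are invented; X_2 is a multiple of X_1 and X_3 runs in reverse, so their correlations with X_1 should be 1 and −1):

```python
from math import sqrt

def correlation_matrix(columns):
    """Build the k x k matrix of pairwise Pearson correlations.
    Diagonal entries are 1; entry (i, j) is r_ij between X_i and X_j."""
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)
    k = len(columns)
    return [[pearson(columns[i], columns[j]) for j in range(k)] for i in range(k)]

# Hypothetical data: x2 = 2 * x1 (r = 1), x3 reversed (r = -1).
x1 = [1, 2, 3, 4]
x2 = [2, 4, 6, 8]
x3 = [4, 3, 2, 1]
R = correlation_matrix([x1, x2, x3])
print([round(v, 6) for v in R[0]])  # first row: approximately [1.0, 1.0, -1.0]
```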

• Critical region

In hypothesis testing, this is the portion of the sample space of a test statistic that will lead to rejection of the null hypothesis.

• Critical value(s)

The value of a statistic corresponding to a stated significance level, as determined from the sampling distribution. For example, if P(Z ≥ z_0.025) = P(Z ≥ 1.96) = 0.025, then z_0.025 = 1.96 is the critical value of z at the 0.025 level of significance.
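
The z_0.025 = 1.96 example can be reproduced with Python's standard library, using the inverse normal CDF to find the point that leaves 0.025 of probability in the upper tail:

```python
from statistics import NormalDist

# Critical value of z at the 0.025 significance level (upper tail):
# the z with P(Z >= z) = 0.025, i.e. the 97.5th percentile of the standard normal.
alpha = 0.025
z_crit = NormalDist().inv_cdf(1 - alpha)
print(round(z_crit, 2))  # 1.96
```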

• Crossed factors

Another name for factors that are arranged in a factorial experiment.

• Curvilinear regression

An expression sometimes used for nonlinear regression models or polynomial regression models.

• Defects-per-unit control chart

See U chart.

• Design matrix

A matrix that provides the tests that are to be conducted in an experiment.

• Dispersion

The amount of variability exhibited by data.
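
Common numerical measures of dispersion (range, variance, standard deviation) are available in Python's standard library; a quick sketch with an invented sample:

```python
import statistics

# Measures of dispersion for a hypothetical sample (mean = 5).
data = [2, 4, 4, 4, 5, 5, 7, 9]
print(max(data) - min(data))       # range: 7
print(statistics.pvariance(data))  # population variance: 4
print(statistics.pstdev(data))     # population standard deviation: 2
```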

• Distribution function

Another name for a cumulative distribution function.

• Error sum of squares

In analysis of variance, this is the portion of total variability that is due to the random component in the data. It is usually based on replication of observations at certain treatment combinations in the experiment. It is sometimes called the residual sum of squares, although that term is better reserved for the case where the sum of squares is based on the remnants of a model-fitting process rather than on replication.

• Extra sum of squares method

A method used in regression analysis to conduct a hypothesis test for the additional contribution of one or more variables to a model.

• Geometric mean

The geometric mean of a set of n positive data values is the nth root of the product of the data values; that is, g = (x_1 · x_2 ⋯ x_n)^(1/n).
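
The nth-root-of-the-product definition can be verified directly in Python, alongside the standard library's own implementation (the two data values are invented; for [2, 8] the geometric mean is √16 = 4):

```python
from math import prod
from statistics import geometric_mean

# Geometric mean: nth root of the product of n positive values.
data = [2, 8]  # hypothetical positive values
g = prod(data) ** (1 / len(data))
print(g)                     # 4.0
print(geometric_mean(data))  # library equivalent, approximately 4.0
```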
