 104.1: What is the dependent variable?
 104.2: What are the independent variables?
 104.3: What are the multiple regression assumptions?
 104.4: Explain what 4540 and 1290 in the equation tell us.
 104.5: What is the predicted income if a person took 8 math classes and wo...
 104.6: What does a multiple correlation coefficient of 0.77 mean?
 104.7: Compute R².
 104.8: Compute the adjusted R².
 104.9: Would the equation be considered a good predictor of income?
 104.10: What are your conclusions about the relationship among courses take...
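The R² questions above (104.6–104.8) can be sketched directly from the given multiple correlation coefficient R = 0.77. The sample size n and number of predictors k are not stated in this excerpt, so n = 20 and k = 2 below are hypothetical values used only to illustrate the adjusted-R² formula.

```python
def r_squared(R):
    """Coefficient of multiple determination: the square of R."""
    return R ** 2

def adjusted_r_squared(R, n, k):
    """Adjusted R^2 = 1 - (1 - R^2)(n - 1) / (n - k - 1)."""
    r2 = R ** 2
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# R = 0.77 is from exercise 104.6; n = 20 and k = 2 are assumed.
print(round(r_squared(0.77), 4))                 # 0.5929
print(round(adjusted_r_squared(0.77, 20, 2), 4))
```

Because the adjusted R² penalizes extra predictors, it is always at or below R² and is the fairer figure when comparing models with different numbers of variables.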
 104.1: Explain the similarities and differences between simple linear regr...
 104.2: What is the general form of the multiple regression equation? What ...
 104.3: Why would a researcher prefer to conduct a multiple regression stud...
 104.4: What are the assumptions for multiple regression?
 104.5: How do the values of the individual correlation coefficients compar...
 104.6: Age, GPA, and Income A researcher has determined that a significant...
 104.7: Assembly Line Work A manufacturer found that a significant relation...
 104.8: Fat, Calories, and Carbohydrates A nutritionist established a signi...
 104.9: Aspects of Students' Academic Behavior A college statistics professor...
 104.10: Age, Cholesterol, and Sodium A medical researcher found a significa...
 104.11: Explain the meaning of the multiple correlation coefficient R.
 104.12: What is the range of values R can assume?
 104.13: Define R² and the adjusted R².
 104.14: What are the hypotheses used to test the significance of R?
 104.15: What test is used to test the significance of R?
 104.16: What is the meaning of the adjusted R²? Why is it computed?
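Questions 104.14–104.15 concern testing H₀: ρ = 0 against H₁: ρ ≠ 0 with an F test. A minimal sketch of the standard F statistic, F = (R²/k) / ((1 − R²)/(n − k − 1)) with k and n − k − 1 degrees of freedom, is shown below; the values R = 0.77, n = 20, k = 2 are illustrative assumptions, not figures from the text.

```python
def f_statistic(R, n, k):
    """F test statistic for the significance of the multiple correlation R.

    F = (R^2 / k) / ((1 - R^2) / (n - k - 1)),
    with k and n - k - 1 degrees of freedom.
    """
    r2 = R ** 2
    return (r2 / k) / ((1 - r2) / (n - k - 1))

# Illustrative values: R = 0.77, n = 20 observations, k = 2 predictors.
print(round(f_statistic(0.77, 20, 2), 2))
```

The computed F is then compared with the critical F value at the chosen significance level; a larger F leads to rejecting H₀.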
Solutions for Chapter 104: Correlation and Regression
Full solutions for Elementary Statistics: A Step by Step Approach, 7th Edition
ISBN: 9780073534978
Chapter 104: Correlation and Regression includes 26 full step-by-step solutions for Elementary Statistics: A Step by Step Approach, 7th edition (ISBN 9780073534978).

Arithmetic mean
The arithmetic mean of a set of numbers x₁, x₂, …, xₙ is their sum divided by the number of observations: x̄ = (1/n) Σᵢ₌₁ⁿ xᵢ. The arithmetic mean is usually denoted by x̄ and is often called the average.
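A minimal check of the definition above, using Python's standard-library statistics module (the data values are arbitrary illustrations):

```python
from statistics import mean

data = [2, 4, 6, 8]   # x1, ..., xn with n = 4
print(mean(data))     # (2 + 4 + 6 + 8) / 4 = 5
```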

Backward elimination
A method of variable selection in regression that begins with all of the candidate regressor variables in the model and eliminates the insignificant regressors one at a time until only significant regressors remain.

Binomial random variable
A discrete random variable that equals the number of successes in a fixed number of Bernoulli trials.

Central composite design (CCD)
A second-order response surface design in k variables consisting of a two-level factorial, 2k axial runs, and one or more center points. The two-level factorial portion of a CCD can be a fractional factorial design when k is large. The CCD is the most widely used design for fitting a second-order model.

Conditional probability mass function
The probability mass function of the conditional probability distribution of a discrete random variable.

Confounding
When a factorial experiment is run in blocks and the blocks are too small to contain a complete replicate of the experiment, one can run a fraction of the replicate in each block, but this results in losing information on some effects. These effects are linked with or confounded with the blocks. In general, when two factors are varied such that their individual effects cannot be determined separately, their effects are said to be confounded.

Confidence coefficient
The probability 1 − α associated with a confidence interval, expressing the probability that the stated interval will contain the true parameter value.

Continuous distribution
A probability distribution for a continuous random variable.

Correlation matrix
A square matrix that contains the correlations among a set of random variables, say, X₁, X₂, …, Xₖ. The main diagonal elements of the matrix are unity and the off-diagonal elements r_ij are the correlations between Xᵢ and Xⱼ.
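The definition above can be illustrated with a hand-rolled Pearson correlation (standard library only) for two hypothetical variables; the resulting matrix has unity on the diagonal and is symmetric.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative data: X2 is a perfect linear function of X1.
X1 = [1, 2, 3, 4]
X2 = [2, 4, 6, 8]
matrix = [[pearson(a, b) for b in (X1, X2)] for a in (X1, X2)]
print(matrix)   # diagonal elements are 1.0
```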

Critical region
In hypothesis testing, this is the portion of the sample space of a test statistic that will lead to rejection of the null hypothesis.

Critical value(s)
The value of a statistic corresponding to a stated significance level as determined from the sampling distribution. For example, if P(Z ≥ z₀.₀₂₅) = P(Z ≥ 1.96) = 0.025, then z₀.₀₂₅ = 1.96 is the critical value of z at the 0.025 level of significance.
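The z critical value in the example above can be verified with the standard library's NormalDist inverse CDF (Python 3.8+):

```python
from statistics import NormalDist

# Upper-tail area of 0.025 corresponds to the 0.975 quantile.
z = NormalDist().inv_cdf(1 - 0.025)
print(round(z, 2))   # 1.96
```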

Crossed factors
Another name for factors that are arranged in a factorial experiment.

Curvilinear regression
An expression sometimes used for nonlinear regression models or polynomial regression models.

Defectsperunit control chart
See U chart.

Design matrix
A matrix that provides the tests that are to be conducted in an experiment.

Dispersion
The amount of variability exhibited by data.

Distribution function
Another name for a cumulative distribution function.

Error sum of squares
In analysis of variance, this is the portion of total variability that is due to the random component in the data. It is usually based on replication of observations at certain treatment combinations in the experiment. It is sometimes called the residual sum of squares, although this is really a better term to use only when the sum of squares is based on the remnants of a model-fitting process and not on replication.

Extra sum of squares method
A method used in regression analysis to conduct a hypothesis test for the additional contribution of one or more variables to a model.

Geometric mean
The geometric mean of a set of n positive data values is the nth root of the product of the data values; that is, x̄_G = (x₁ x₂ ⋯ xₙ)^(1/n).
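A quick check of the definition above with the standard library's statistics.geometric_mean (Python 3.8+); the data values are arbitrary illustrations:

```python
from statistics import geometric_mean

data = [2, 8]                  # (2 * 8) ** (1/2) = 4
print(geometric_mean(data))
```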