Solutions for Chapter 11-8: ADEQUACY OF THE REGRESSION MODEL

Full solutions for Applied Statistics and Probability for Engineers, 3rd Edition

ISBN: 9780471204541 | Authors: Douglas C. Montgomery, George C. Runger

Applied Statistics and Probability for Engineers was written by Douglas C. Montgomery and George C. Runger and is associated with the ISBN 9780471204541. Chapter 11-8: ADEQUACY OF THE REGRESSION MODEL includes 13 full step-by-step solutions. This textbook survival guide was created for Applied Statistics and Probability for Engineers, 3rd edition, and covers the following chapters and their solutions. Since all 13 problems in Chapter 11-8: ADEQUACY OF THE REGRESSION MODEL have been answered, more than 22,595 students have viewed full step-by-step solutions from this chapter.

Key Statistics Terms and definitions covered in this textbook
  • α-error (or α-risk)

    In hypothesis testing, an error incurred by rejecting a null hypothesis when it is actually true (also called a type I error).

  • All possible (subsets) regressions

    A method of variable selection in regression that examines all possible subsets of the candidate regressor variables. Efficient computer algorithms have been developed for implementing all possible regressions.
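
For illustration, here is a minimal sketch of such an exhaustive search in Python/NumPy: it fits every non-empty subset of four simulated candidate regressors by least squares and reports R² for each. The simulated data and the use of R² as the comparison criterion are assumptions for this example, not part of the definition.

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))  # four candidate regressor variables (simulated)
y = 2.0 + 1.5 * X[:, 0] - 0.7 * X[:, 2] + rng.normal(scale=0.5, size=30)

def r_squared(X_sub, y):
    """Fit y on an intercept plus the given regressors and return R^2."""
    A = np.column_stack([np.ones(len(y)), X_sub])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

# Examine all possible (non-empty) subsets of the candidate regressors.
for k in range(1, X.shape[1] + 1):
    for cols in combinations(range(X.shape[1]), k):
        print(cols, round(r_squared(X[:, list(cols)], y), 3))
```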

  • Analysis of variance (ANOVA)

    A method of decomposing the total variability in a set of observations, as measured by the sum of the squares of these observations from their average, into component sums of squares that are associated with specific defined sources of variation.
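
For example, a minimal sketch of the one-way decomposition SS_Total = SS_Treatments + SS_Error; the three groups of observations are assumed example data.

```python
import numpy as np

# Three assumed treatment groups of observations.
groups = [np.array([5.0, 6.0, 7.0]),
          np.array([8.0, 9.0, 10.0]),
          np.array([4.0, 5.0, 6.0])]
y_all = np.concatenate(groups)
grand_mean = y_all.mean()

ss_total = ((y_all - grand_mean) ** 2).sum()                           # total variability
ss_treat = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)  # between-group component
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)            # within-group component
print(ss_total, ss_treat + ss_error)  # the two totals agree
```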

  • Asymptotic relative efficiency (ARE)

    Used to compare hypothesis tests. The ARE of one test relative to another is the limiting ratio of the sample sizes necessary to obtain identical error probabilities for the two procedures.

  • Bernoulli trials

    Sequences of independent trials with only two outcomes, generally called “success” and “failure,” in which the probability of success remains constant.
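
A minimal simulation sketch; the success probability p = 0.3 is an assumed example value.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.3                               # constant probability of success (assumed)
trials = rng.random(10_000) < p       # independent trials with two outcomes
print(trials.mean())                  # observed proportion of successes, close to p
```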

  • Central composite design (CCD)

    A second-order response surface design in k variables consisting of a two-level factorial, 2k axial runs, and one or more center points. The two-level factorial portion of a CCD can be a fractional factorial design when k is large. The CCD is the most widely used design for fitting a second-order model.
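
A minimal sketch of the run structure for k = 2; the axial distance √2 and the use of three center points are assumptions for this example.

```python
import itertools
import math

k, alpha, n_center = 2, math.sqrt(2), 3
factorial_runs = list(itertools.product([-1, 1], repeat=k))   # the two-level factorial portion
axial_runs = []
for i in range(k):                                            # 2k axial runs at distance alpha
    for sign in (-alpha, alpha):
        point = [0.0] * k
        point[i] = sign
        axial_runs.append(tuple(point))
center_runs = [(0.0,) * k] * n_center                         # one or more center points
design = factorial_runs + axial_runs + center_runs
print(len(design))                                            # 2**k + 2*k + n_center = 11 runs
for run in design:
    print(run)
```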

  • Central limit theorem

    The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. It is a necessary and sufficient condition that none of the variances of the individual random variables are large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
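
For example, a minimal simulation sketch in which standardized sums of n = 50 uniform random variables behave approximately like a standard normal; the uniform distribution and the sample sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 50, 100_000
sums = rng.uniform(size=(reps, n)).sum(axis=1)     # sums of n independent uniforms
z = (sums - n * 0.5) / np.sqrt(n / 12.0)           # standardize: mean n/2, variance n/12
print(np.mean(np.abs(z) < 1.96))                   # ≈ 0.95, as for a standard normal
```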

  • Chi-square (or chi-squared) random variable

    A continuous random variable that results from the sum of squares of independent standard normal random variables. It is a special case of a gamma random variable.
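
A minimal simulation sketch; k = 5 degrees of freedom is an assumed example value.

```python
import numpy as np

rng = np.random.default_rng(3)
k, reps = 5, 200_000
chisq = (rng.standard_normal((reps, k)) ** 2).sum(axis=1)  # sum of squares of k standard normals
print(chisq.mean(), chisq.var())                           # ≈ k and 2k, the chi-square mean and variance
```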

  • Conditional mean

    The mean of the conditional probability distribution of a random variable.

  • Confidence interval

    If it is possible to write a probability statement of the form P(L ≤ θ ≤ U) = 1 − α, where L and U are functions of only the sample data and θ is a parameter, then the interval between L and U is called a confidence interval (or a 100(1 − α)% confidence interval). The interpretation is that a statement that the parameter θ lies in this interval will be true 100(1 − α)% of the times that such a statement is made.
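
For example, a minimal sketch of a 95% confidence interval for a normal mean with known standard deviation; the six data values and σ = 0.25 are assumed for illustration.

```python
import numpy as np

x = np.array([9.8, 10.2, 10.4, 9.9, 10.1, 10.3])   # assumed sample data
sigma, z = 0.25, 1.96                               # known sigma (assumed); z for alpha = 0.05
half_width = z * sigma / np.sqrt(len(x))
L, U = x.mean() - half_width, x.mean() + half_width
print((L, U))   # statements that mu lies in (L, U) are true 95% of the time
```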

  • Counting techniques

    Formulas used to determine the number of elements in sample spaces and events.

  • Covariance

    A measure of association between two random variables obtained as the expected value of the product of the two random variables around their means; that is, Cov(X, Y) = E[(X − μX)(Y − μY)].
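
A minimal simulation sketch computing the covariance directly from this definition; the joint distribution used here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=100_000)
y = 0.6 * x + rng.normal(scale=0.8, size=100_000)   # two associated random variables
cov = np.mean((x - x.mean()) * (y - y.mean()))      # E[(X - mu_X)(Y - mu_Y)], estimated
print(cov, np.cov(x, y, bias=True)[0, 1])           # agrees with NumPy's covariance
```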

  • Covariance matrix

    A square matrix that contains the variances and covariances among a set of random variables, say, X1, X2, …, Xk. The main diagonal elements of the matrix are the variances of the random variables and the off-diagonal elements are the covariances between Xi and Xj. Also called the variance-covariance matrix. When the random variables are standardized to have unit variances, the covariance matrix becomes the correlation matrix.
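
A minimal sketch with three assumed random variables, showing the covariance matrix and the correlation matrix obtained from the standardized variables.

```python
import numpy as np

rng = np.random.default_rng(5)
true_cov = [[1.0, 0.5, 0.2],
            [0.5, 2.0, 0.3],
            [0.2, 0.3, 1.5]]
data = rng.multivariate_normal([0.0, 0.0, 0.0], true_cov, size=50_000)
S = np.cov(data, rowvar=False)        # variances on the diagonal, covariances off the diagonal
R = np.corrcoef(data, rowvar=False)   # covariance matrix of the standardized variables
print(np.round(S, 2))
print(np.round(R, 2))
```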

  • Defect

    Used in statistical quality control, a defect is a particular type of nonconformance to specifications or requirements. Sometimes defects are classified into types, such as appearance defects and functional defects.

  • Deming’s 14 points.

    A management philosophy promoted by W. Edwards Deming that emphasizes the importance of change and quality.

  • Density function

    Another name for a probability density function.

  • Distribution free method(s)

    Any method of inference (hypothesis testing or confidence interval construction) that does not depend on the form of the underlying distribution of the observations. Sometimes called nonparametric method(s).

  • Error propagation

    An analysis of how the variance of the random variable that represents the output of a system depends on the variances of the inputs. A formula exists when the output is a linear function of the inputs, and the formula is simplified if the inputs are assumed to be independent.
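
A minimal sketch of the linear, independent-inputs case, Var(aX + bY) = a²Var(X) + b²Var(Y); the coefficients and input variances are assumptions for this example.

```python
import numpy as np

rng = np.random.default_rng(6)
a, b = 2.0, -3.0
x = rng.normal(scale=1.5, size=200_000)    # input X with Var(X) = 2.25
y = rng.normal(scale=0.5, size=200_000)    # independent input Y with Var(Y) = 0.25
out = a * x + b * y                        # output as a linear function of the inputs
print(out.var(), a**2 * 1.5**2 + b**2 * 0.5**2)   # simulated variance vs. the formula
```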

  • Estimator (or point estimator)

    A procedure for producing an estimate of a parameter of interest. An estimator is usually a function of only sample data values, and when these data values are available, it results in an estimate of the parameter of interest.

  • Fractional factorial experiment

    A type of factorial experiment in which not all possible treatment combinations are run. This is usually done to reduce the size of an experiment with several factors.
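
For example, a minimal sketch of a one-half fraction of a 2³ factorial (a 2^(3−1) design) built with the generator C = AB; the factor labels are illustrative.

```python
import itertools

runs = []
for a, b in itertools.product([-1, 1], repeat=2):
    c = a * b                  # generator C = AB selects half of the 2**3 combinations
    runs.append((a, b, c))
for run in runs:
    print(run)                 # 4 of the 8 possible treatment combinations are run
```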
