
The Practice of Statistics 5th Edition - Solutions by Chapter

Textbook: The Practice of Statistics
Edition: 5
Author: Daren S. Starnes, Josh Tabor
ISBN: 9781464108730

The full step-by-step solutions to the problems in The Practice of Statistics were answered by our top Statistics solution expert on 03/19/18, 03:52PM. This expansive textbook survival guide covers 44 chapters. Since problems from all 44 chapters in The Practice of Statistics have been answered, more than 103,764 students have viewed the full step-by-step answers. This textbook survival guide was created for The Practice of Statistics, 5th edition. The Practice of Statistics was written by Daren S. Starnes and Josh Tabor and is associated with ISBN 9781464108730.

Key Statistics Terms and definitions covered in this textbook
  • 2^k factorial experiment.

    A full factorial experiment with k factors and all factors tested at only two levels (settings) each.
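
    For instance, every run of a 2^k design can be listed by taking each factor at a low (-1) or high (+1) level; a minimal Python sketch (the -1/+1 coding is a common convention, not something this definition prescribes):

      from itertools import product

      k = 3  # number of factors
      # all 2**k treatment combinations, each factor at its low (-1) or high (+1) level
      for run in product([-1, +1], repeat=k):
          print(run)   # 8 runs when k = 3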

  • Alias

    In a fractional factorial experiment when certain factor effects cannot be estimated uniquely, they are said to be aliased.

  • All possible (subsets) regressions

    A method of variable selection in regression that examines all possible subsets of the candidate regressor variables. Efficient computer algorithms have been developed for implementing all possible regressions.
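
    As a rough illustration of the idea, the brute-force version simply fits least squares on every non-empty subset of regressors (a hypothetical sketch; practical implementations rely on the efficient algorithms mentioned above):

      from itertools import combinations
      import numpy as np

      def all_subsets_sse(X, y):
          """Residual sum of squares for every non-empty subset of the columns of X."""
          n, p = X.shape
          sse = {}
          for r in range(1, p + 1):
              for cols in combinations(range(p), r):
                  Xs = np.column_stack([np.ones(n), X[:, list(cols)]])  # intercept + chosen regressors
                  beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
                  resid = y - Xs @ beta
                  sse[cols] = float(resid @ resid)
          return sse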

  • Assignable cause

    The portion of the variability in a set of observations that can be traced to specific causes, such as operators, materials, or equipment. Also called a special cause.

  • Backward elimination

    A method of variable selection in regression that begins with all of the candidate regressor variables in the model and eliminates the insignificant regressors one at a time until only significant regressors remain.
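
    One common version drops, at each step, the regressor with the largest p-value until every remaining p-value falls below a chosen cutoff; a hedged sketch using statsmodels (the pandas DataFrame input and the 0.05 cutoff are our assumptions, not part of this definition):

      import statsmodels.api as sm

      def backward_elimination(X, y, alpha=0.05):
          """X: pandas DataFrame of candidate regressors, y: response."""
          keep = list(X.columns)
          while keep:
              fit = sm.OLS(y, sm.add_constant(X[keep])).fit()
              pvals = fit.pvalues.drop("const")        # p-values of the current regressors
              worst = pvals.idxmax()
              if pvals[worst] <= alpha:                # everything left is significant
                  break
              keep.remove(worst)                       # eliminate the least significant regressor
          return keep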

  • Binomial random variable

    A discrete random variable that equals the number of successes in a fixed number of Bernoulli trials.
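
    Concretely, if X counts the successes in n independent trials with success probability p, then P(X = x) = C(n, x) p^x (1 - p)^(n - x); a quick check in Python:

      from math import comb

      def binomial_pmf(x, n, p):
          """P(X = x) for a binomial random variable with n trials and success probability p."""
          return comb(n, x) * p**x * (1 - p)**(n - x)

      print(binomial_pmf(3, 10, 0.5))   # 0.1171875 = probability of exactly 3 successes in 10 trials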

  • Bivariate normal distribution

    The joint distribution of two normal random variables.

  • Center line

    A horizontal line on a control chart at the value that estimates the mean of the statistic plotted on the chart. See Control chart.

  • Coefficient of determination

    See R^2.

  • Comparative experiment

    An experiment in which the treatments (experimental conditions) that are to be studied are included in the experiment. The data from the experiment are used to evaluate the treatments.

  • Conditional probability

    The probability of an event given that the random experiment produces an outcome in another event.
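
    That is, for events A and B with P(B) > 0, P(A | B) = P(A and B) / P(B). A small illustration by counting equally likely outcomes (the two-dice example is ours, not the textbook's):

      from itertools import product

      space = list(product(range(1, 7), repeat=2))      # all ordered rolls of two fair dice
      B = [s for s in space if sum(s) >= 10]            # given: the total is at least 10
      A_and_B = [s for s in B if s[0] == 6]             # and the first die shows 6

      print(len(A_and_B) / len(B))                      # P(A | B) = 3/6 = 0.5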

  • Cook’s distance

    In regression, Cook’s distance is a measure of the influence of each individual observation on the estimates of the regression model parameters. It expresses the distance that the vector of model parameter estimates with the ith observation removed lies from the vector of model parameter estimates based on all observations. Large values of Cook’s distance indicate that the observation is influential.
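
    A direct, if inefficient, way to see this is to refit the model with each observation left out and measure how far the coefficient vector moves; a minimal numpy sketch of the usual standardized form (illustrative only, not a production implementation):

      import numpy as np

      def cooks_distance(X, y):
          """Cook's distance for each observation of an ordinary least squares fit.
          X: n-by-p design matrix including an intercept column, y: response vector."""
          n, p = X.shape
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          s2 = resid @ resid / (n - p)                   # residual mean square
          XtX = X.T @ X
          D = np.empty(n)
          for i in range(n):
              keep = np.arange(n) != i
              beta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
              d = beta - beta_i
              D[i] = d @ XtX @ d / (p * s2)              # large D[i] flags an influential observation
          return D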

  • Correlation

    In the most general usage, a measure of the interdependence among data. The concept may include more than two variables. The term is most commonly used in a narrow sense to express the relationship between quantitative variables or ranks.
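
    In the narrow sense, the sample (Pearson) correlation between two quantitative variables can be computed directly; a minimal example:

      import numpy as np

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
      y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

      print(np.corrcoef(x, y)[0, 1])   # close to +1: strong positive linear association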

  • Decision interval

    A parameter in a tabular CUSUM algorithm that is determined from a trade-off between false alarms and the detection of assignable causes.
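
    In the upper one-sided tabular CUSUM, for example, the statistic C_i = max(0, x_i - (mu0 + K) + C_{i-1}) is compared with the decision interval H: a larger H gives fewer false alarms but slower detection of a shift. A hedged sketch (the reference value K and interval H are illustrative choices):

      def tabular_cusum_upper(xs, mu0, K, H):
          """Return the indices at which the upper tabular CUSUM exceeds the decision interval H."""
          c, signals = 0.0, []
          for i, x in enumerate(xs):
              c = max(0.0, x - (mu0 + K) + c)   # accumulate deviations beyond the reference value K
              if c > H:
                  signals.append(i)             # possible assignable cause
          return signals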

  • Defining relation

    A subset of effects in a fractional factorial design that define the aliases in the design.

  • Discrete random variable

    A random variable with a finite (or countably infinite) range.

  • Dispersion

    The amount of variability exhibited by data.

  • Error sum of squares

    In analysis of variance, this is the portion of total variability that is due to the random component in the data. It is usually based on replication of observations at certain treatment combinations in the experiment. It is sometimes called the residual sum of squares, although this is really a better term to use only when the sum of squares is based on the remnants of a model-fitting process and not on replication.
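
    For a single-factor experiment with replication, for example, the error sum of squares adds up the squared deviations of each observation from its own treatment mean; a small numpy sketch with illustrative data:

      import numpy as np

      groups = [np.array([4.1, 3.9, 4.3]),     # replicated observations at
                np.array([5.0, 5.4, 4.8]),     # three treatment levels
                np.array([6.2, 5.9, 6.0])]

      sse = sum(float(np.sum((g - g.mean()) ** 2)) for g in groups)
      print(sse)                               # within-treatment (error) sum of squares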

  • Finite population correction factor

    A term in the formula for the variance of a hypergeometric random variable.
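
    For a hypergeometric random variable X (a sample of size n drawn without replacement from N items, K of which are successes), Var(X) = n (K/N)(1 - K/N)(N - n)/(N - 1); the last factor is the finite population correction. A quick numeric check:

      def hypergeometric_variance(N, K, n):
          p = K / N
          fpc = (N - n) / (N - 1)          # finite population correction factor
          return n * p * (1 - p) * fpc

      print(hypergeometric_variance(N=50, K=20, n=10))   # about 1.96, versus 2.4 with replacement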

  • Geometric mean.

    The geometric mean of a set of n positive data values is the nth root of the product of the data values; that is, the geometric mean equals (x_1 × x_2 × ... × x_n)^(1/n).
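
    A quick check in Python (statistics.geometric_mean is available in Python 3.8+; the manual computation mirrors the definition):

      from math import prod
      from statistics import geometric_mean

      data = [2.0, 8.0]                        # any set of positive values
      manual = prod(data) ** (1 / len(data))   # nth root of the product of the values
      print(manual, geometric_mean(data))      # both give 4.0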