Solutions for Chapter 4.3: Elementary Statistics 12th Edition

Textbook: Elementary Statistics
Edition: 12
Author: Mario F. Triola
ISBN: 9780321836960

This textbook survival guide was created for Elementary Statistics, 12th edition (ISBN 9780321836960), by Mario F. Triola, and covers the textbook's chapters and their solutions. Chapter 4.3 includes 43 full step-by-step solutions, which have been viewed by more than 411,059 students.

Key statistics terms and definitions covered in this textbook
  • α-error (or α-risk)

    In hypothesis testing, the error incurred by rejecting a null hypothesis when it is actually true (also called a type I error).

  • All possible (subsets) regressions

    A method of variable selection in regression that examines all possible subsets of the candidate regressor variables. Efficient computer algorithms have been developed for implementing all possible regressions.

  • Assignable cause

    The portion of the variability in a set of observations that can be traced to specific causes, such as operators, materials, or equipment. Also called a special cause.

  • Bayes’ theorem

    An equation for a conditional probability such as P(A | B) in terms of the reverse conditional probability P(B | A).
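As a numerical sketch, Bayes' theorem P(A | B) = P(B | A) P(A) / P(B) can be checked directly; the screening-test numbers below are hypothetical, chosen only for illustration:

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical screening test: 1% prevalence, 95% sensitivity,
# 10% false-positive rate among the disease-free.
p_a = 0.01
p_b_given_a = 0.95
p_b = p_b_given_a * p_a + 0.10 * (1 - p_a)  # total probability of a positive

posterior = bayes(p_b_given_a, p_a, p_b)    # P(disease | positive test)
```

Even with a fairly accurate test, the low prior keeps the posterior probability of disease under 10% here, which is the classic base-rate effect Bayes' theorem makes explicit.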

  • Binomial random variable

    A discrete random variable that equals the number of successes in a fixed number of Bernoulli trials.
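A binomial draw can be simulated straight from this definition; in this minimal sketch (the function name is ours), each Bernoulli trial succeeds when a uniform draw falls below p:

```python
import random

def binomial_sample(n, p, rng):
    """One binomial draw: count successes in n Bernoulli(p) trials."""
    return sum(rng.random() < p for _ in range(n))

rng = random.Random(0)
x = binomial_sample(n=10, p=0.5, rng=rng)  # integer between 0 and 10
```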

  • Box plot (or box and whisker plot)

    A graphical display of data in which the box contains the middle 50% of the data (the interquartile range) with the median dividing it, and the whiskers extend to the smallest and largest values (or some defined lower and upper limits).
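The ingredients of the box, the quartiles and the interquartile range, can be computed with the standard library; the data values below are illustrative:

```python
import statistics

data = [2, 4, 4, 5, 7, 8, 9, 12, 15, 21]
q1, median, q3 = statistics.quantiles(data, n=4)  # quartile cut points
iqr = q3 - q1  # the box of a box plot spans this interquartile range
```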

  • Central limit theorem

    The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. It is a necessary and sufficient condition that none of the variances of the individual random variables are large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
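The statement can be checked by simulation; this sketch (our own, with arbitrary sample sizes) sums 50 independent Uniform(0, 1) draws and compares the empirical mean and spread with the theoretical values n/2 and sqrt(n/12):

```python
import random
import statistics

def sum_of_uniforms(n_terms, rng):
    """Sum of n_terms independent Uniform(0, 1) random variables."""
    return sum(rng.random() for _ in range(n_terms))

rng = random.Random(42)
sums = [sum_of_uniforms(50, rng) for _ in range(2000)]

# Theory for the sum of 50 uniforms: mean n/2 = 25, sd sqrt(n/12) ~ 2.04
empirical_mean = statistics.mean(sums)
empirical_sd = statistics.stdev(sums)
```

A histogram of `sums` would already look close to a normal curve, even though each individual term is uniform rather than normal.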

  • Coefficient of determination

    See R².

  • Confounding

    When a factorial experiment is run in blocks and the blocks are too small to contain a complete replicate of the experiment, one can run a fraction of the replicate in each block, but this results in losing information on some effects. These effects are linked with or confounded with the blocks. In general, when two factors are varied such that their individual effects cannot be determined separately, their effects are said to be confounded.

  • Covariance

    A measure of association between two random variables obtained as the expected value of the product of the two random variables around their means; that is, Cov(X, Y) = E[(X − μX)(Y − μY)].
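A sample version of this expectation can be computed directly; in this sketch (function name ours), deviations from each mean are multiplied and averaged, with the usual n − 1 divisor for sample data:

```python
def sample_covariance(xs, ys):
    """Average product of deviations of x and y from their means."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    return sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / (n - 1)

# ys is a positive multiple of xs, so the covariance is positive.
cov = sample_covariance([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```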

  • Discrete distribution

    A probability distribution for a discrete random variable.

  • Distribution free method(s)

    Any method of inference (hypothesis testing or confidence interval construction) that does not depend on the form of the underlying distribution of the observations. Sometimes called nonparametric method(s).

  • Error propagation

    An analysis of how the variance of the random variable that represents the output of a system depends on the variances of the inputs. A formula exists when the output is a linear function of the inputs, and the formula is simplified if the inputs are assumed to be independent.
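For a linear output Z = aX + bY with independent inputs, the simplified formula is Var(Z) = a²Var(X) + b²Var(Y). This sketch (coefficients and variances chosen arbitrarily) checks it by simulation:

```python
import random
import statistics

# Linear output Z = a*X + b*Y with independent normal inputs X, Y.
a, b = 2.0, 3.0
var_x, var_y = 1.0, 4.0

rng = random.Random(0)
zs = [a * rng.gauss(0.0, var_x ** 0.5) + b * rng.gauss(0.0, var_y ** 0.5)
      for _ in range(20000)]

predicted = a ** 2 * var_x + b ** 2 * var_y  # formula for independent inputs
observed = statistics.variance(zs)           # should be close to predicted
```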

  • Error sum of squares

    In analysis of variance, this is the portion of total variability that is due to the random component in the data. It is usually based on replication of observations at certain treatment combinations in the experiment. It is sometimes called the residual sum of squares, although that term is better reserved for a sum of squares based on the remnants of a model-fitting process rather than on replication.

  • Exponential random variable

    A continuous random variable that equals the time between counts in a Poisson process.

  • F distribution

    The distribution of the random variable defined as the ratio of two independent chi-square random variables, each divided by its number of degrees of freedom.
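As an illustrative sketch (not from the text), an F-distributed draw can be built from this definition, using sums of squared standard normals as chi-square random variables:

```python
import random

def chi_square(df, rng):
    """Chi-square draw: sum of df squared standard normals."""
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))

rng = random.Random(1)
d1, d2 = 5, 8
f = (chi_square(d1, rng) / d1) / (chi_square(d2, rng) / d2)  # one F(5, 8) draw
```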

  • F-test

    Any test of significance involving the F distribution. The most common F-tests are (1) testing hypotheses about the variances or standard deviations of two independent normal distributions, (2) testing hypotheses about treatment means or variance components in the analysis of variance, and (3) testing significance of regression or tests on subsets of parameters in a regression model.

  • Fraction defective control chart

    See P chart.

  • Frequency distribution

    An arrangement of the frequencies of observations in a sample or population according to the values that the observations take on.
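A frequency distribution can be tabulated directly with the standard library; the sample observations below are made up for illustration:

```python
from collections import Counter

observations = [3, 1, 2, 3, 3, 1, 2, 2, 3, 1]
freq = Counter(observations)   # maps each value to its frequency
table = sorted(freq.items())   # arranged by the values taken on
```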

  • Harmonic mean

    The harmonic mean of a set of data values is the reciprocal of the arithmetic mean of the reciprocals of the data values; that is, h = n / (1/x_1 + 1/x_2 + … + 1/x_n).
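The definition translates directly into code (function name ours):

```python
def harmonic_mean(values):
    """Reciprocal of the arithmetic mean of the reciprocals."""
    n = len(values)
    return n / sum(1.0 / v for v in values)

h = harmonic_mean([1.0, 2.0, 4.0])  # 3 / (1 + 0.5 + 0.25)
```

The same result is available as `statistics.harmonic_mean` in Python 3.6+.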