
# Solutions for Chapter 6-3: The Normal Distribution

## Full solutions for Elementary Statistics: A Step by Step Approach | 7th Edition

ISBN: 9780073534978


This textbook survival guide covers Elementary Statistics: A Step by Step Approach, 7th edition (ISBN 9780073534978), and its chapter solutions. Chapter 6-3: The Normal Distribution includes 30 full step-by-step solutions, and more than 27,600 students have viewed solutions from this chapter.

## Key statistics terms and definitions covered in this textbook
• Additivity property of x 2

If two independent random variables X1 and X2 are distributed as chi-square with v1 and v2 degrees of freedom, respectively, then Y = X1 + X2 is a chi-square random variable with v = v1 + v2 degrees of freedom. This generalizes to any number of independent chi-square random variables.
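The additivity property can be checked numerically. The following is a small simulation sketch (using NumPy; the degrees of freedom and sample size are illustrative choices, not from the text):

```python
# Empirical check of the additivity property of chi-square:
# if X1 ~ chi2(v1) and X2 ~ chi2(v2) are independent,
# then Y = X1 + X2 ~ chi2(v1 + v2).
import numpy as np

rng = np.random.default_rng(0)
v1, v2, n = 3, 5, 200_000

x1 = rng.chisquare(v1, size=n)
x2 = rng.chisquare(v2, size=n)
y = x1 + x2

# A chi-square variable with v degrees of freedom has mean v and variance 2v,
# so y should have mean close to v1 + v2 = 8 and variance close to 16.
print(round(y.mean(), 1))
print(round(y.var(), 1))
```

The moment check (mean v, variance 2v) is a quick diagnostic; a histogram of `y` against the chi-square density with 8 degrees of freedom would make the same point graphically.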

• Attribute

A qualitative characteristic of an item or unit, usually arising in quality control. For example, classifying production units as defective or nondefective results in attributes data.

• Attribute control chart

Any control chart for a discrete random variable. See Variables control chart.

• Block

In experimental design, a group of experimental units or material that is relatively homogeneous. The purpose of dividing experimental units into blocks is to produce an experimental design wherein variability within blocks is smaller than variability between blocks. This allows the factors of interest to be compared in an environment that has less variability than in an unblocked experiment.

• Central composite design (CCD)

A second-order response surface design in k variables consisting of a two-level factorial, 2k axial runs, and one or more center points. The two-level factorial portion of a CCD can be a fractional factorial design when k is large. The CCD is the most widely used design for fitting a second-order model.

• Central limit theorem

The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. A necessary and sufficient condition is that none of the variances of the individual random variables are large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
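The theorem is easy to see in simulation. Here is a minimal sketch (using NumPy; Uniform(0,1) summands and n = 30 are illustrative assumptions, not from the text):

```python
# Illustration of the central limit theorem: standardized sums of n
# independent Uniform(0,1) variables behave approximately like a
# standard normal variable.
import numpy as np

rng = np.random.default_rng(1)
n, trials = 30, 100_000

# Each Uniform(0,1) variable has mean 1/2 and variance 1/12,
# so the sum of n of them has mean n/2 and variance n/12.
sums = rng.random((trials, n)).sum(axis=1)
z = (sums - n * 0.5) / np.sqrt(n / 12.0)

# The standardized sums should have mean ~0 and variance ~1, and
# roughly 95% should fall within +/-1.96, as for a standard normal.
print(round(z.mean(), 2), round(z.var(), 2))
print(round(np.mean(np.abs(z) < 1.96), 2))
```

Nothing about the uniform distribution is special here; replacing `rng.random` with draws from another distribution with finite variance (and adjusting the mean and variance used for standardization) gives the same approximately normal shape.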

• Chance cause

The portion of the variability in a set of observations that is due to only random forces and which cannot be traced to specific sources, such as operators, materials, or equipment. Also called a common cause.

• Chi-square test

Any test of significance based on the chi-square distribution. The most common chi-square tests are (1) testing hypotheses about the variance or standard deviation of a normal distribution and (2) testing goodness of fit of a theoretical distribution to sample data.
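The goodness-of-fit statistic can be computed by hand. The sketch below uses hypothetical die-roll counts (not data from the text) and the standard formula chi2 = sum((observed − expected)² / expected):

```python
# Chi-square goodness-of-fit statistic for a fair-die hypothesis:
# chi2 = sum((observed - expected)^2 / expected).
import numpy as np

observed = np.array([18, 22, 16, 25, 19, 20])   # hypothetical roll counts
expected = np.full(6, observed.sum() / 6)        # fair die: equal expected counts

chi2_stat = ((observed - expected) ** 2 / expected).sum()
print(round(chi2_stat, 2))   # -> 2.5
```

To complete the test, this statistic would be compared against a chi-square critical value with 6 − 1 = 5 degrees of freedom (about 11.07 at the 0.05 level); a statistic of 2.5 would not reject the fair-die hypothesis.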

• Comparative experiment

An experiment in which the treatments (experimental conditions) that are to be studied are included in the experiment. The data from the experiment are used to evaluate the treatments.

• Conditional probability density function

The probability density function of the conditional probability distribution of a continuous random variable.

• Confounding

When a factorial experiment is run in blocks and the blocks are too small to contain a complete replicate of the experiment, one can run a fraction of the replicate in each block, but this results in losing information on some effects. These effects are linked with or confounded with the blocks. In general, when two factors are varied such that their individual effects cannot be determined separately, their effects are said to be confounded.

• Consistent estimator

An estimator that converges in probability to the true value of the estimated parameter as the sample size increases.
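Consistency can be visualized by watching the estimation error shrink as the sample size grows. The following sketch (using NumPy; the exponential population and the specific sample sizes are illustrative assumptions) tracks the root-mean-square error of the sample mean:

```python
# Consistency of the sample mean: as the sample size n grows, the
# sample mean converges in probability to the true population mean,
# so its root-mean-square error shrinks toward zero.
import numpy as np

rng = np.random.default_rng(2)
true_mean = 4.0
reps = 200
rmse = []

for n in (10, 100, 10_000):
    # reps independent samples of size n from an exponential population
    samples = rng.exponential(scale=true_mean, size=(reps, n))
    err = samples.mean(axis=1) - true_mean
    rmse.append(float(np.sqrt(np.mean(err ** 2))))

# Each increase in n should reduce the RMSE (roughly by 1/sqrt(n)).
print([round(e, 3) for e in rmse])
```

The 1/sqrt(n) decay visible here is the usual rate for the sample mean; consistency itself only requires that the error eventually becomes arbitrarily small.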

• Correlation coefficient

A dimensionless measure of the linear association between two variables, usually lying in the interval from −1 to +1, with zero indicating the absence of correlation (but not necessarily the independence of the two variables).
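The caveat at the end of this definition, that zero correlation does not imply independence, is worth a concrete example. A minimal sketch (using NumPy; the choice Y = X² is an illustrative assumption):

```python
# The correlation coefficient measures only *linear* association.
# Here Y = X^2 is completely determined by X, yet for symmetric X
# the correlation is near zero: uncorrelated does not mean independent.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=100_000)
y = x ** 2                      # perfectly dependent on x, but nonlinearly

r = np.corrcoef(x, y)[0, 1]
print(round(r, 2))              # close to 0 despite total dependence
```

The effect comes from symmetry: Cov(X, X²) = E[X³] = 0 when X is standard normal, so the linear measure sees nothing even though knowing X fixes Y exactly.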

• Density function

Another name for a probability density function.

• Empirical model

A model to relate a response to one or more regressors or factors that is developed from data obtained from the system.

• Enumerative study

A study in which a sample from a population is used to make inference to the population. See Analytic study.

• Error sum of squares

In analysis of variance, this is the portion of total variability that is due to the random component in the data. It is usually based on replication of observations at certain treatment combinations in the experiment. It is sometimes called the residual sum of squares, although this is really a better term to use only when the sum of squares is based on the remnants of a model-fitting process and not on replication.
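The replication-based version of this quantity is a short computation. The sketch below uses small hypothetical data (three treatments with four replicates each, not from the text):

```python
# Error sum of squares from replication: for each treatment, add up the
# squared deviations of the replicate observations from that treatment's
# mean, then sum over treatments.
import numpy as np

# Hypothetical data: 3 treatments, 4 replicate observations each.
data = np.array([
    [10.0, 11.0,  9.0, 10.0],   # treatment 1
    [14.0, 13.0, 15.0, 14.0],   # treatment 2
    [ 8.0,  9.0,  7.0,  8.0],   # treatment 3
])

treatment_means = data.mean(axis=1, keepdims=True)
sse = float(((data - treatment_means) ** 2).sum())
print(sse)   # -> 6.0
```

Because each deviation is taken from its own treatment mean, differences *between* treatments contribute nothing here; `sse` captures only the within-treatment (random) variability.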

• Estimate (or point estimate)

The numerical value of a point estimator.

• Fraction defective control chart

See P chart.

• Gaussian distribution

Another name for the normal distribution, based on the strong connection of Carl F. Gauss to the normal distribution; often used in physics and electrical engineering applications.
