
# Solutions for Chapter 3: The Normal Distributions

## Full solutions for The Basic Practice of Statistics | 4th Edition

ISBN: 9780716774785


Chapter 3: The Normal Distributions includes 48 full step-by-step solutions. The Basic Practice of Statistics is associated with the ISBN 9780716774785. Since the 48 problems in Chapter 3: The Normal Distributions have been answered, more than 7665 students have viewed full step-by-step solutions from this chapter. This expansive textbook survival guide covers the following chapters and their solutions. It was created for the textbook The Basic Practice of Statistics, 4th edition.

## Key Statistics Terms and Definitions Covered in This Textbook

• Adjusted R²

A variation of the R² statistic that compensates for the number of parameters in a regression model. Essentially, the adjustment is a penalty for increasing the number of parameters in the model.

• Alias

In a fractional factorial experiment, when certain factor effects cannot be estimated uniquely, they are said to be aliased.

• Analytic study

A study in which a sample from a population is used to make inference to a future population. Stability needs to be assumed. See Enumerative study

• Categorical data

Data consisting of counts or observations that can be classified into categories. The categories may be descriptive.

• Cause-and-effect diagram

A chart used to organize the various potential causes of a problem. Also called a fishbone diagram.

• Center line

A horizontal line on a control chart at the value that estimates the mean of the statistic plotted on the chart. See Control chart.

• Central limit theorem

The simplest form of the central limit theorem states that the sum of n independently distributed random variables will tend to be normally distributed as n becomes large. It is a necessary and sufficient condition that none of the variances of the individual random variables are large in comparison to their sum. There are more general forms of the central limit theorem that allow infinite variances and correlated random variables, and there is a multivariate version of the theorem.
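
The tendency toward normality can be seen in a quick simulation. The sketch below (an illustration, not part of the textbook's solutions) sums n = 30 uniform(0, 1) random variables many times and checks the result against the theoretical mean n/2 and standard deviation √(n/12):

```python
# Central limit theorem demonstration: sums of n independent uniform(0, 1)
# random variables are approximately normal with mean n/2 and variance n/12.
import numpy as np

rng = np.random.default_rng(0)
n = 30          # number of variables in each sum
reps = 100_000  # number of simulated sums

sums = rng.uniform(0.0, 1.0, size=(reps, n)).sum(axis=1)

print(sums.mean())  # close to n/2 = 15.0
print(sums.std())   # close to sqrt(n/12) ≈ 1.58
```

A histogram of `sums` would show the familiar bell shape even though each underlying variable is flat (uniform), which is the point of the theorem.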

• Conditional mean

The mean of the conditional probability distribution of a random variable.

• Conditional probability density function

The probability density function of the conditional probability distribution of a continuous random variable.

• Contour plot

A two-dimensional graphic used for a bivariate probability density function that displays curves for which the probability density function is constant.

• Convolution

A method to derive the probability density function of the sum of two independent random variables from an integral (or sum) of probability density (or mass) functions.
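
For discrete variables the convolution integral becomes a sum, which `np.convolve` computes directly. As a hypothetical example, the distribution of the total on two fair dice is the convolution of each die's probability mass function with itself:

```python
# Convolution of two probability mass functions: the distribution of the
# sum of two independent fair dice via np.convolve.
import numpy as np

die = np.full(6, 1 / 6)           # P(face) for faces 1..6
pmf_sum = np.convolve(die, die)   # P(total) for totals 2..12

totals = np.arange(2, 13)
print(dict(zip(totals, pmf_sum.round(4))))  # P(total = 7) = 6/36 is largest
```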

• Cook’s distance

In regression, Cook’s distance is a measure of the influence of each individual observation on the estimates of the regression model parameters. It expresses the distance that the vector of model parameter estimates with the ith observation removed lies from the vector of model parameter estimates based on all observations. Large values of Cook’s distance indicate that the observation is influential.
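
A minimal sketch of the computation, using the standard formula Dᵢ = (eᵢ² / (p · MSE)) · hᵢᵢ / (1 − hᵢᵢ)², where hᵢᵢ is the ith leverage from the hat matrix. The data here are hypothetical, chosen so the last observation sits far from the others in x:

```python
# Cook's distance for a simple linear regression, computed from the
# residuals and leverages. The last data point has high leverage.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])  # last point is far out in x
y = np.array([1.2, 1.9, 3.2, 3.9, 12.0])

X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta                           # residuals
H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat matrix
h = np.diag(H)                             # leverages h_ii
p = X.shape[1]                             # number of model parameters
mse = (e @ e) / (len(y) - p)

D = (e**2 / (p * mse)) * h / (1 - h) ** 2
print(D)  # the high-leverage last observation has the largest D
```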

• Correlation

In the most general usage, a measure of the interdependence among data. The concept may include more than two variables. The term is most commonly used in a narrow sense to express the relationship between quantitative variables or ranks.

• Correlation coefficient

A dimensionless measure of the linear association between two variables, usually lying in the interval from −1 to +1, with zero indicating the absence of correlation (but not necessarily the independence of the two variables).
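
The coefficient can be computed straight from its definition, r = cov(x, y) / (sₓ · s_y). A short sketch with hypothetical data that are roughly y = 2x, so r should be close to +1:

```python
# Pearson correlation coefficient from its definition:
# r = cov(x, y) / (sd(x) * sd(y)). Hypothetical data for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # approximately y = 2x

r = np.cov(x, y)[0, 1] / (x.std(ddof=1) * y.std(ddof=1))
print(r)  # close to +1: strong positive linear association
```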

• Correlation matrix

A square matrix that contains the correlations among a set of random variables, say X1, X2, …, Xk. The main diagonal elements of the matrix are unity, and the off-diagonal elements rij are the correlations between Xi and Xj.
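
`np.corrcoef` builds this matrix directly, with each row of the input treated as one variable. A sketch with three hypothetical variables, two of them constructed to be correlated:

```python
# Correlation matrix of three random variables via np.corrcoef.
# The main diagonal is all ones; off-diagonal entries are pairwise r's.
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + 0.5 * rng.normal(size=200)  # strongly correlated with x1
x3 = rng.normal(size=200)             # generated independently

R = np.corrcoef([x1, x2, x3])
print(np.diag(R))  # [1. 1. 1.] -- each variable correlates perfectly with itself
print(R[0, 1])     # r12 is large and positive
print(R[0, 2])     # r13 is near zero
```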

• Discrete random variable

A random variable with a finite (or countably infinite) range.

• Experiment

A series of tests in which changes are made to the system under study.

• F distribution

The distribution of the random variable defined as the ratio of two independent chi-square random variables, each divided by its number of degrees of freedom.

• F-test

Any test of significance involving the F distribution. The most common F-tests are (1) testing hypotheses about the variances or standard deviations of two independent normal distributions, (2) testing hypotheses about treatment means or variance components in the analysis of variance, and (3) testing significance of regression or tests on subsets of parameters in a regression model.

• Factorial experiment

A type of experimental design in which every level of one factor is tested in combination with every level of another factor. In general, in a factorial experiment, all possible combinations of factor levels are tested.

• Forward selection

A method of variable selection in regression, where variables are inserted one at a time into the model until no other variables that contribute significantly to the model can be found.
