- 4.2.1E: In a binomial experiment, what does it mean to say that each trial ...
- 4.2.2E: In a binomial experiment with n trials, what does the random variab...
- 4.2.4E: Graphical Analysis The histograms shown below represent binomial di...
- 4.2.5E: Graphical Analysis The histograms shown below represent binomial di...
- 4.2.7E: Identify the unusual values of x in each histogram in Exercise.Exer...
- 4.2.8E: Identify the unusual values of x in each histogram in Exercise.Exer...
- 4.2.12E: Identifying and Understanding Binomial Experiments In Exercise, det...
- 4.2.13E: Mean, Variance, and Standard Deviation In Exercise, find the mean, ...
- 4.2.14E: Mean, Variance, and Standard Deviation In Exercise, find the mean, ...
- 4.2.15E: Mean, Variance, and Standard Deviation In Exercise, find the mean, ...
- 4.2.16E: Mean, Variance, and Standard Deviation In Exercise, find the mean, ...
- 4.2.37E: Multinomial Experiments In Exercise, use the information below.Gene...
- 4.2.38E: Multinomial Experiments In Exercise, use the information below.Gene...
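Exercises 4.2.13E–4.2.16E ask for the mean, variance, and standard deviation of binomial distributions. A minimal Python sketch of the standard formulas (μ = np, σ² = npq, σ = √(npq)), using hypothetical values of n and p:

```python
import math

def binomial_summary(n, p):
    """Mean, variance, and standard deviation of a Binomial(n, p) distribution."""
    q = 1 - p
    mean = n * p           # mu = n*p
    variance = n * p * q   # sigma^2 = n*p*q
    sd = math.sqrt(variance)
    return mean, variance, sd

# Hypothetical example: n = 100 trials, success probability p = 0.2
mean, variance, sd = binomial_summary(100, 0.2)
print(mean, variance, sd)  # 20.0 16.0 4.0
```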
Solutions for Chapter 4.2: Elementary Statistics: Picturing the World 5th Edition
2^(k−p) factorial experiment
A fractional factorial experiment with k factors, each tested at only two levels (settings), run in a 2^(−p) fraction of the full factorial.
α-error (or α-risk)
In hypothesis testing, an error incurred by rejecting a null hypothesis when it is actually true (also called a type I error).
Bimodal distribution
A distribution with two modes.
Categorical data
Data consisting of counts or observations that can be classified into categories. The categories may be descriptive.
Central tendency
The tendency of data to cluster around some value. Central tendency is usually expressed by a measure of location such as the mean, median, or mode.
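The three measures of location named above (mean, median, mode) are available in Python's standard library; a small sketch with hypothetical data:

```python
import statistics

# Hypothetical sample data
data = [2, 3, 3, 5, 7, 10]

print(statistics.mean(data))    # 5.0  — arithmetic average
print(statistics.median(data))  # 4.0  — middle value (here, mean of 3 and 5)
print(statistics.mode(data))    # 3    — most frequent value
```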
Chi-square test(s)
Any test of significance based on the chi-square distribution. The most common chi-square tests are (1) testing hypotheses about the variance or standard deviation of a normal distribution and (2) testing goodness of fit of a theoretical distribution to sample data.
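The goodness-of-fit statistic mentioned above is the sum of (observed − expected)² / expected over all categories; a minimal sketch with hypothetical counts:

```python
# Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E over categories.
# Observed and expected counts are hypothetical.
observed = [30, 50, 20]
expected = [25, 50, 25]

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi_sq)  # 2.0
```

The statistic would then be compared to a chi-square critical value with (number of categories − 1) degrees of freedom.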
Conditional probability mass function
The probability mass function of the conditional probability distribution of a discrete random variable.
Confidence level
Another term for the confidence coefficient.
Continuity correction
A correction factor used to improve the approximation to binomial probabilities from a normal distribution.
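The correction consists of widening the interval by 0.5 before applying the normal approximation. A sketch using the standard-library error function, with hypothetical n and p:

```python
import math

def normal_cdf(x, mu, sigma):
    """Normal CDF via the error function (math.erf is in the stdlib)."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Approximate P(X <= 55) for X ~ Binomial(n=100, p=0.5) using the normal
# approximation with a continuity correction: evaluate the CDF at 55 + 0.5.
n, p = 100, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
approx = normal_cdf(55 + 0.5, mu, sigma)
print(round(approx, 4))  # about 0.8643
```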
Control chart
A graphical display used to monitor a process. It usually consists of a horizontal center line corresponding to the in-control value of the parameter that is being monitored and lower and upper control limits. The control limits are determined by statistical criteria and are not arbitrary, nor are they related to specification limits. If sample points fall within the control limits, the process is said to be in-control, or free from assignable causes. Points beyond the control limits indicate an out-of-control process; that is, assignable causes are likely present. This signals the need to find and remove the assignable causes.
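For an X-bar chart (one common control chart), the center line is the grand mean and the control limits sit three standard errors above and below it. A minimal sketch with hypothetical samples and σ estimated from the pooled observations:

```python
import statistics

# Hypothetical subgroups of n = 3 measurements each
samples = [[10.1, 9.9, 10.0], [10.2, 10.1, 9.8], [9.9, 10.0, 10.3]]
sample_means = [statistics.mean(s) for s in samples]

center = statistics.mean(sample_means)        # center line: grand mean
all_obs = [x for s in samples for x in s]
sigma = statistics.stdev(all_obs)             # sigma estimate for this sketch
n = len(samples[0])
ucl = center + 3 * sigma / n ** 0.5           # upper control limit
lcl = center - 3 * sigma / n ** 0.5           # lower control limit
print(lcl, center, ucl)
```

A sample mean falling outside (lcl, ucl) would signal an out-of-control process.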
Cook’s distance
In regression, Cook’s distance is a measure of the influence of each individual observation on the estimates of the regression model parameters. It expresses the distance that the vector of model parameter estimates with the ith observation removed lies from the vector of model parameter estimates based on all observations. Large values of Cook’s distance indicate that the observation is influential.
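One way to compute the quantity described above is via leave-one-out refits: D_i = Σ_j (ŷ_j − ŷ_j(i))² / (p · MSE), where ŷ(i) are fitted values from the model with observation i removed and p is the number of parameters. A sketch for simple linear regression with hypothetical data (the last point is an influential outlier):

```python
# Cook's distance for simple linear regression via leave-one-out refits.

def fit_line(xs, ys):
    """Least-squares intercept and slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

def cooks_distances(xs, ys):
    p = 2  # parameters: intercept + slope
    b0, b1 = fit_line(xs, ys)
    fitted = [b0 + b1 * x for x in xs]
    mse = sum((y, f) == () or (y - f) ** 2 for y, f in zip(ys, fitted)) / (len(xs) - p)
    ds = []
    for i in range(len(xs)):
        c0, c1 = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        refit = [c0 + c1 * x for x in xs]
        ds.append(sum((f - r) ** 2 for f, r in zip(fitted, refit)) / (p * mse))
    return ds

xs = [1.0, 2.0, 3.0, 4.0, 10.0]
ys = [1.1, 1.9, 3.2, 3.9, 2.0]  # last observation pulls the line down
d = cooks_distances(xs, ys)
print(d)  # the last observation's distance dominates
```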
Counting techniques
Formulas used to determine the number of elements in sample spaces and events.
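The three standard counting formulas (factorial, permutations, combinations) are in Python's `math` module:

```python
import math

# Counting techniques from combinatorics (stdlib since Python 3.8):
print(math.factorial(5))  # 120: ways to order 5 distinct items
print(math.perm(5, 2))    # 20: ordered arrangements of 2 items chosen from 5
print(math.comb(5, 2))    # 10: unordered selections of 2 items from 5
```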
Critical value(s)
The value of a statistic corresponding to a stated significance level as determined from the sampling distribution. For example, if P(Z ≥ z₀.₀₂₅) = 0.025, then z₀.₀₂₅ = 1.96 is the critical value of z at the 0.025 level of significance.
Crossed factors
Another name for factors that are arranged in a factorial experiment.
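The z critical value at the 0.025 level, z₀.₀₂₅ = 1.96, can be recovered numerically as the 0.975 quantile of the standard normal, using the standard-library `statistics.NormalDist`:

```python
from statistics import NormalDist

# Find z with P(Z >= z) = 0.025, i.e. the 0.975 quantile of the standard normal.
z = NormalDist().inv_cdf(0.975)
print(round(z, 2))  # 1.96
```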
Cumulative distribution function
For a random variable X, the function of X defined as F(x) = P(X ≤ x) that is used to specify the probability distribution.
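For a discrete random variable, F(x) = P(X ≤ x) is the running sum of the probability mass function; a sketch with a hypothetical three-valued variable:

```python
# CDF of a discrete random variable: F(x) = P(X <= x), the running sum of the
# probability mass function. The pmf values here are hypothetical.
pmf = {1: 0.2, 2: 0.3, 3: 0.5}

def cdf(x):
    return sum(p for value, p in pmf.items() if value <= x)

print(cdf(2))  # 0.5 = P(X = 1) + P(X = 2)
```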
Designed experiment
An experiment in which the tests are planned in advance and the plans usually incorporate statistical models. See Experiment.
Distribution free method(s)
Any method of inference (hypothesis testing or confidence interval construction) that does not depend on the form of the underlying distribution of the observations. Sometimes called nonparametric method(s).
Enumerative study
A study in which a sample from a population is used to make inference to the population. See Analytic study.
Error sum of squares
In analysis of variance, this is the portion of total variability that is due to the random component in the data. It is usually based on replication of observations at certain treatment combinations in the experiment. It is sometimes called the residual sum of squares, although this is really a better term to use only when the sum of squares is based on the remnants of a model-fitting process and not on replication.
Estimator (or point estimator)
A procedure for producing an estimate of a parameter of interest. An estimator is usually a function of only sample data values, and when these data values are available, it results in an estimate of the parameter of interest.
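As a concrete case of the definition above, the sample mean is a point estimator of the population mean μ; a sketch drawing a hypothetical sample from a normal population with μ = 10:

```python
import random
import statistics

# The sample mean as a point estimator of the population mean mu.
# Hypothetical population: normal with mu = 10, sigma = 2.
random.seed(0)
sample = [random.gauss(10, 2) for _ in range(500)]
estimate = statistics.mean(sample)  # point estimate of mu; close to 10
print(estimate)
```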
Fraction defective
In statistical quality control, that portion of a number of units or the output of a process that is defective.